Jan 31 16:30:12 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 16:30:12 crc restorecon[4566]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 16:30:12
crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 
16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc 
restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 
crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 
crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 
16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 16:30:12 crc 
restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc 
restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:12 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 16:30:13 crc restorecon[4566]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 16:30:14 crc kubenswrapper[4730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 16:30:14 crc kubenswrapper[4730]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 16:30:14 crc kubenswrapper[4730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 16:30:14 crc kubenswrapper[4730]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 16:30:14 crc kubenswrapper[4730]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 16:30:14 crc kubenswrapper[4730]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.191775 4730 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200215 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200257 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200268 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200277 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200286 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200295 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200304 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200313 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200324 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200335 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200345 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200354 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200364 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200372 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200380 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200390 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200399 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200408 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200416 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200425 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200437 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200449 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200459 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200468 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200477 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200485 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200494 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200502 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200510 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200518 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200526 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200541 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200550 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200562 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200573 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200584 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200594 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200605 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200614 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200623 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200634 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200644 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200653 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200663 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200672 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200684 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200693 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200704 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200712 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200721 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200729 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200737 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200745 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200755 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200763 4730 feature_gate.go:330] unrecognized feature gate: Example Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200771 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200779 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200788 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200796 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200836 4730 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200844 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200853 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200861 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200869 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200877 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200886 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200894 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200903 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200911 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200919 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.200927 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201082 4730 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201101 4730 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201115 4730 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201128 4730 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201140 4730 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201150 4730 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201163 4730 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201175 4730 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201186 4730 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201195 4730 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201206 4730 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201218 4730 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201229 4730 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201239 4730 flags.go:64] FLAG: --cgroup-root="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201249 4730 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201258 4730 flags.go:64] FLAG: 
--client-ca-file="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201268 4730 flags.go:64] FLAG: --cloud-config="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201277 4730 flags.go:64] FLAG: --cloud-provider="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201287 4730 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201298 4730 flags.go:64] FLAG: --cluster-domain="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201308 4730 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201318 4730 flags.go:64] FLAG: --config-dir="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201327 4730 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201338 4730 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201350 4730 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201360 4730 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201370 4730 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201380 4730 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201389 4730 flags.go:64] FLAG: --contention-profiling="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201400 4730 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201409 4730 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201419 4730 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201429 4730 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201440 4730 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201450 4730 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201459 4730 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201469 4730 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201478 4730 flags.go:64] FLAG: --enable-server="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201488 4730 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201499 4730 flags.go:64] FLAG: --event-burst="100" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201509 4730 flags.go:64] FLAG: --event-qps="50" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201519 4730 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201528 4730 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201538 4730 flags.go:64] FLAG: --eviction-hard="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201550 4730 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201560 4730 flags.go:64] FLAG: 
--eviction-minimum-reclaim="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201570 4730 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201581 4730 flags.go:64] FLAG: --eviction-soft="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201590 4730 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201600 4730 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201612 4730 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201622 4730 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201631 4730 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201641 4730 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201650 4730 flags.go:64] FLAG: --feature-gates="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201662 4730 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201672 4730 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201682 4730 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201692 4730 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201702 4730 flags.go:64] FLAG: --healthz-port="10248" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201712 4730 flags.go:64] FLAG: --help="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201721 4730 flags.go:64] FLAG: --hostname-override="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201731 4730 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201741 4730 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201750 4730 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201760 4730 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201769 4730 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201779 4730 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201789 4730 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201798 4730 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201834 4730 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201843 4730 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201855 4730 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201865 4730 flags.go:64] FLAG: --kube-reserved="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201875 4730 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201884 4730 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201895 4730 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201905 4730 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201914 4730 flags.go:64] FLAG: --lock-file="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201924 4730 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201933 4730 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201945 4730 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201970 4730 flags.go:64] FLAG: --log-json-split-stream="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201981 4730 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.201991 4730 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202000 4730 flags.go:64] FLAG: --logging-format="text" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202010 4730 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202020 4730 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202030 4730 flags.go:64] FLAG: --manifest-url="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202040 4730 flags.go:64] FLAG: --manifest-url-header="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202052 4730 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202062 4730 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202074 4730 flags.go:64] FLAG: --max-pods="110" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202083 4730 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202093 4730 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202103 4730 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202113 4730 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202123 4730 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202132 4730 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202142 4730 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202162 4730 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202172 4730 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202182 4730 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202192 4730 flags.go:64] FLAG: --pod-cidr="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202202 4730 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202215 4730 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202225 4730 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202234 4730 flags.go:64] FLAG: --pods-per-core="0" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202245 4730 flags.go:64] FLAG: --port="10250" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202255 4730 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202264 4730 flags.go:64] FLAG: --provider-id="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202275 4730 flags.go:64] FLAG: --qos-reserved="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202284 4730 flags.go:64] FLAG: --read-only-port="10255" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202294 4730 flags.go:64] FLAG: --register-node="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202303 4730 flags.go:64] FLAG: --register-schedulable="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202313 4730 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202328 4730 flags.go:64] FLAG: --registry-burst="10" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202338 4730 flags.go:64] FLAG: --registry-qps="5" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202348 4730 flags.go:64] FLAG: --reserved-cpus="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202358 4730 flags.go:64] FLAG: --reserved-memory="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202370 4730 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202380 4730 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202391 4730 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202400 4730 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202411 4730 flags.go:64] FLAG: --runonce="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202421 4730 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202432 4730 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202442 4730 flags.go:64] FLAG: --seccomp-default="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202452 4730 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202462 4730 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202472 4730 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202481 4730 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202492 4730 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202501 4730 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 
16:30:14.202511 4730 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202521 4730 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202530 4730 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202540 4730 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202551 4730 flags.go:64] FLAG: --system-cgroups="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202560 4730 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202575 4730 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202584 4730 flags.go:64] FLAG: --tls-cert-file="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202594 4730 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202606 4730 flags.go:64] FLAG: --tls-min-version="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202615 4730 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202625 4730 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202635 4730 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202644 4730 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202654 4730 flags.go:64] FLAG: --v="2" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202672 4730 flags.go:64] FLAG: --version="false" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202683 4730 flags.go:64] FLAG: --vmodule="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202694 4730 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.202705 4730 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.202983 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.202997 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203012 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203024 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203034 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203045 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203054 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203063 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203072 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203080 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor 
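Each FLAG: line above records the command-line value as parsed at startup, before the kubelet applies the config file named by --config. Once the apiserver is reachable, the effective merged configuration can be read back through the node proxy; a hedged one-liner, assuming the node name crc from the log prefix, standard oc/kubectl tooling, and jq on the path:

    # Effective (post-merge) kubelet configuration for node "crc".
    $ oc get --raw "/api/v1/nodes/crc/proxy/configz" | jq .kubeletconfig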
Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203089 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203097 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203106 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203115 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203123 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203133 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203145 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203159 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203174 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203183 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203192 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203201 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203210 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203218 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203226 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203234 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203243 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203252 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203260 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203269 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203277 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203286 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203294 4730 feature_gate.go:330] unrecognized feature gate: Example Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203302 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203311 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203321 4730 
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203353 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203364 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203374 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203383 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203391 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203399 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203408 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203416 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203431 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203442 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203452 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203461 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203470 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203478 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203487 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203496 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203507 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203518 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203528 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203537 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203546 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203555 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203564 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203572 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203583 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203593 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203602 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203612 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203621 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203630 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203638 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203647 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203656 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203666 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.203676 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.203703 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.217638 4730 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.217678 4730 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217832 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 16:30:14 crc kubenswrapper[4730]: 
W0131 16:30:14.217848 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217860 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217870 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217881 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217892 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217904 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217913 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217922 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217930 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217939 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217947 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217956 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217964 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217972 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217980 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217988 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.217997 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218004 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218012 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218022 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218031 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218040 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218048 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218056 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218066 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
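The feature_gate.go:386 summary above is the gate map the kubelet actually applies (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy, and the rest), while the long runs of "unrecognized feature gate" warnings cover OpenShift-level gates this kubelet binary does not recognize and therefore ignores. A sketch of how gates it does recognize are expressed in a KubeletConfiguration, copying two entries from the map above; placement is illustrative, not this node's actual file:

    # Sketch: featureGates is a map[string]bool in kubelet.config.k8s.io/v1beta1.
    featureGates:
      CloudDualStackNodeIPs: true
      KMSv1: true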
Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218077 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218086 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218094 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218103 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218111 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218119 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218129 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218140 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218149 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218158 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218167 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218175 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218184 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218193 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218201 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218210 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218219 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218227 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218234 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218242 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218249 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218257 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218265 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218273 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218280 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 
16:30:14.218288 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218295 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218303 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218310 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218318 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218325 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218333 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218341 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218349 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218357 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218364 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218371 4730 feature_gate.go:330] unrecognized feature gate: Example Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218379 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218415 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218428 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218438 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218447 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218454 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218462 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218470 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.218482 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218711 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218721 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 16:30:14 crc kubenswrapper[4730]: 
W0131 16:30:14.218733 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218744 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218753 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218761 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218769 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218778 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218787 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218795 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218828 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218836 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218844 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218854 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218865 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218873 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218884 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218893 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218902 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218910 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218917 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218925 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218932 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218940 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218948 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218955 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218963 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218970 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218978 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218987 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.218994 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219002 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219010 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219018 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219027 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219034 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219042 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219050 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219057 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219065 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219073 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219080 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 
16:30:14.219088 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219096 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219104 4730 feature_gate.go:330] unrecognized feature gate: Example Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219112 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219119 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219127 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219135 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219143 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219150 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219158 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219165 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219173 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219180 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219188 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219195 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219203 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219211 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219218 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219226 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219234 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219244 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219252 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219261 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219271 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219280 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219288 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219295 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219303 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.219311 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.219322 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.219606 4730 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.225010 4730 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.225135 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.226777 4730 server.go:997] "Starting client certificate rotation" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.226842 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.227746 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-27 11:01:19.906329027 +0000 UTC Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.227891 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.256652 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.260316 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.260663 4730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.277057 4730 log.go:25] "Validated CRI v1 runtime API" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.318837 4730 log.go:25] "Validated CRI v1 image API" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.321139 4730 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.329408 4730 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-16-25-19-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.329458 4730 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.349472 4730 manager.go:217] Machine: {Timestamp:2026-01-31 16:30:14.347380924 +0000 UTC m=+1.153437920 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:04f37162-2d97-4238-903e-03a07bd637ec BootID:fd417392-7b12-4953-b7d4-8fe09595e010 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:27:51:a0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:27:51:a0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cd:72:e2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7c:bf:c1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e1:84:f9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a3:bb:79 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:ee:73:0d:45:b5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:fa:54:83:be:bc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.349792 4730 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.350004 4730 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.350473 4730 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.350761 4730 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.350842 4730 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.351148 4730 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.351164 4730 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.351649 4730 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.351698 4730 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.352172 4730 state_mem.go:36] "Initialized new in-memory state store" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.352679 4730 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.361476 4730 kubelet.go:418] "Attempting to sync node with API server" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.361537 4730 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.361567 4730 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.361593 4730 kubelet.go:324] "Adding apiserver pod source" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.361615 4730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.368218 4730 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.369380 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.370895 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.371016 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.370991 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.371116 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.372210 4730 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374312 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374366 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374398 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374415 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374445 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374462 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374480 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374507 4730 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374528 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374547 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374570 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.374590 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.375937 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.376749 4730 server.go:1280] "Started kubelet" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.378120 4730 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.378579 4730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 16:30:14 crc systemd[1]: Started Kubernetes Kubelet. Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.379105 4730 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.379102 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.387358 4730 server.go:460] "Adding debug handlers to kubelet server" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.388961 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.389245 4730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.387173 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fddc4720d8231 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 16:30:14.376710705 +0000 UTC m=+1.182767691,LastTimestamp:2026-01-31 16:30:14.376710705 +0000 UTC m=+1.182767691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.390065 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:55:35.163906773 +0000 UTC Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.390175 4730 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.391098 4730 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.390192 4730 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.390762 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.392845 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms" Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.392791 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.393213 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.398328 4730 factory.go:153] Registering CRI-O factory Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.398367 4730 factory.go:221] Registration of the crio container factory successfully Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.400010 4730 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.400155 4730 factory.go:55] Registering systemd factory Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.400187 4730 factory.go:221] Registration of the systemd container factory successfully Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.400231 4730 factory.go:103] Registering Raw factory Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.400260 4730 manager.go:1196] Started watching for new ooms in manager Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.401600 4730 manager.go:319] Starting recovery of all containers Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412530 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412614 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412647 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412674 4730 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412700 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412723 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412747 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412773 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412842 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412873 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412900 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412924 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412947 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.412978 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.413000 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.413026 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.413049 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.413075 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.413103 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.413128 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.413152 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419335 4730 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419394 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419418 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419436 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419454 4730 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419473 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419496 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419516 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419534 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419552 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419569 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419590 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419608 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419625 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419644 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419662 4730 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419679 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419698 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419718 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419736 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419754 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419772 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419790 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419872 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419902 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419946 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.419972 4730 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420022 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420047 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420070 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420097 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420123 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420155 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420182 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420208 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420238 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420268 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420293 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420317 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420346 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420370 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420396 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420422 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420449 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420474 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420496 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420522 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420545 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420571 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420596 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420620 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420643 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420668 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420692 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420716 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420743 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420770 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420793 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420855 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420882 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420905 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420930 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420957 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.420984 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421011 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421038 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421063 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421090 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421115 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421144 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421170 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421199 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421226 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421252 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421277 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421306 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421334 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421360 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421391 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421418 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421445 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421473 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421498 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421525 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421574 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421606 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421634 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421667 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421695 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421724 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421750 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421781 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421854 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421884 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421911 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421936 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421961 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.421987 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422013 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422038 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422070 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422094 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422120 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422145 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422168 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422193 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422221 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422247 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422272 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422295 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422322 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422347 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422371 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422397 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422426 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422452 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422499 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422525 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422550 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422577 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422601 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422623 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422649 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422676 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422702 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422729 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422754 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422779 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422858 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422883 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422909 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422936 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422967 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.422995 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423020 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423046 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423072 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423097 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423135 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423162 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423186 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423212 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423238 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423261 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423288 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423311 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423335 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423359 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423383 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423406 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423431 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423456 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423482 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423508 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423536 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423563 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423588 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423612 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423639 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423665 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423692 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423717 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423742 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423769 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423890 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423927 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423954 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.423985 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424047 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424076 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424105 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424134 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424160 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424185 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424210 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424234 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424262 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424291 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424315 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424332 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424350 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424367 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424387 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424405 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424421 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424439 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424458 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424476 4730 reconstruct.go:97] "Volume reconstruction finished" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.424487 4730 reconciler.go:26] "Reconciler: start to sync state" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.429882 4730 manager.go:324] Recovery completed Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.439457 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.444095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.444139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.444158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.448434 4730 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.448462 4730 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.448572 4730 state_mem.go:36] "Initialized new in-memory state store" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.452833 4730 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.456379 4730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.459746 4730 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.459949 4730 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.460636 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.462662 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.462959 4730 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.468266 4730 policy_none.go:49] "None policy: Start" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.468945 4730 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.469061 4730 state_mem.go:35] "Initializing new in-memory state store" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.491863 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.523826 4730 manager.go:334] "Starting Device Plugin manager" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.525614 4730 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.525649 4730 server.go:79] "Starting device plugin registration server" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.526085 4730 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.526190 4730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.526687 4730 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.526918 4730 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.527003 4730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.538108 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.563318 4730 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.563395 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.564223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.564249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.564256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.564343 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.564633 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.564697 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.565567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.565618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.565658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.565843 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.565981 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.566011 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.566647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.566678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.566691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567258 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567347 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.567377 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568384 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568558 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.568626 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.569119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.569139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.569150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.569318 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.569344 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.570573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.570602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.570614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.570719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.570742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.570776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.593998 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.626658 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.626748 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.626775 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.626852 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.626875 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.626921 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627025 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627525 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627594 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627616 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627636 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627708 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627843 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627879 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627929 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.627950 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.628195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.628222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.628254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.628281 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.628935 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730609 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730686 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730771 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730825 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730860 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730894 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730924 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730957 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.730985 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731013 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731061 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731194 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731240 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731119 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731317 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731370 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731366 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731422 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731461 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731445 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731511 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731546 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731559 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731600 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731625 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731425 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.731762 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.829087 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.831330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.831498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.831654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.831788 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.833039 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.899927 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.911889 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.929377 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.937061 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: I0131 16:30:14.938995 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.965799 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f10ec55265fd8554122b3f44a964ee99052f885934d74fe4f9b9b2fd4a3504b0 WatchSource:0}: Error finding container f10ec55265fd8554122b3f44a964ee99052f885934d74fe4f9b9b2fd4a3504b0: Status 404 returned error can't find the container with id f10ec55265fd8554122b3f44a964ee99052f885934d74fe4f9b9b2fd4a3504b0 Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.968276 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7b050d79c966302e7cb7ed9d311d6ae94b7be0250b732b682c6e326e3b8f14e5 WatchSource:0}: Error finding container 7b050d79c966302e7cb7ed9d311d6ae94b7be0250b732b682c6e326e3b8f14e5: Status 404 returned error can't find the container with id 7b050d79c966302e7cb7ed9d311d6ae94b7be0250b732b682c6e326e3b8f14e5 Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.981996 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-82324092cfeb2e35570124e6d8f049e01e189dbf95c82b39d7d9db41eada16c1 WatchSource:0}: Error finding container 82324092cfeb2e35570124e6d8f049e01e189dbf95c82b39d7d9db41eada16c1: Status 404 returned error can't find the container with id 82324092cfeb2e35570124e6d8f049e01e189dbf95c82b39d7d9db41eada16c1 Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.983321 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-757c044ad7cb2de89a94654dac18af1d04bf9d29f1d1a2d3d6e0c244dacf995b WatchSource:0}: Error finding container 757c044ad7cb2de89a94654dac18af1d04bf9d29f1d1a2d3d6e0c244dacf995b: Status 404 returned error can't find the container with id 757c044ad7cb2de89a94654dac18af1d04bf9d29f1d1a2d3d6e0c244dacf995b Jan 31 16:30:14 crc kubenswrapper[4730]: W0131 16:30:14.983909 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cb23aa5549dc7684a35ad4307c6d11a4178ab1c2a265b06da5b77b42b1d69f6d WatchSource:0}: Error finding container cb23aa5549dc7684a35ad4307c6d11a4178ab1c2a265b06da5b77b42b1d69f6d: Status 404 returned error can't find the container with id cb23aa5549dc7684a35ad4307c6d11a4178ab1c2a265b06da5b77b42b1d69f6d Jan 31 16:30:14 crc kubenswrapper[4730]: E0131 16:30:14.995420 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms" Jan 31 16:30:15 crc kubenswrapper[4730]: W0131 16:30:15.199970 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:15 crc kubenswrapper[4730]: E0131 16:30:15.200082 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.233863 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.234740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.234765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.234773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.234792 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 16:30:15 crc kubenswrapper[4730]: E0131 16:30:15.235118 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.380095 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.391245 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:45:02.395762268 +0000 UTC Jan 31 16:30:15 crc kubenswrapper[4730]: W0131 16:30:15.398051 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:15 crc kubenswrapper[4730]: E0131 16:30:15.398153 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.466133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"757c044ad7cb2de89a94654dac18af1d04bf9d29f1d1a2d3d6e0c244dacf995b"} Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.466872 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82324092cfeb2e35570124e6d8f049e01e189dbf95c82b39d7d9db41eada16c1"} Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.467680 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b050d79c966302e7cb7ed9d311d6ae94b7be0250b732b682c6e326e3b8f14e5"} Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.468554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f10ec55265fd8554122b3f44a964ee99052f885934d74fe4f9b9b2fd4a3504b0"} Jan 31 16:30:15 crc kubenswrapper[4730]: I0131 16:30:15.469186 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb23aa5549dc7684a35ad4307c6d11a4178ab1c2a265b06da5b77b42b1d69f6d"} Jan 31 16:30:15 crc kubenswrapper[4730]: E0131 16:30:15.796895 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s" Jan 31 16:30:15 crc kubenswrapper[4730]: W0131 16:30:15.958188 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:15 crc kubenswrapper[4730]: E0131 16:30:15.958354 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:15 crc kubenswrapper[4730]: W0131 16:30:15.997460 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:15 crc kubenswrapper[4730]: E0131 16:30:15.997530 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.036182 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.038418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.038450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.038458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.038480 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 16:30:16 crc kubenswrapper[4730]: E0131 16:30:16.039159 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.380562 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.391929 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:00:46.277436174 +0000 UTC Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.404121 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 16:30:16 crc kubenswrapper[4730]: E0131 16:30:16.405101 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.472837 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66" exitCode=0 Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.472924 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.472946 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.474018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.474060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.474076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.476299 4730 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a18227efec307a6154703749b5e1dad41648745e260982a1d424c58dab97d912" exitCode=0 Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.476398 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.476577 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a18227efec307a6154703749b5e1dad41648745e260982a1d424c58dab97d912"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.477045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.477080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.477097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479327 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479625 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479648 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479657 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479664 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.479939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.481730 4730 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5" exitCode=0 Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.481853 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.482134 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.482444 4730 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.482461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.482469 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.483710 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054" exitCode=0 Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.483759 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.483769 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054"} Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.484636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.484655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.484663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.489321 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.490491 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.490519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:16 crc kubenswrapper[4730]: I0131 16:30:16.490527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.379840 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.392119 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:26:34.029732162 +0000 UTC Jan 31 16:30:17 crc kubenswrapper[4730]: E0131 16:30:17.398601 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.487816 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712"} Jan 31 16:30:17 crc 
kubenswrapper[4730]: I0131 16:30:17.487853 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.487862 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.487938 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.488721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.488750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.488764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.491148 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.491182 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.491195 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.491209 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.491220 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.491314 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.492100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.492122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.492134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.494220 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef" exitCode=0 Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.494271 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.494337 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.495850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.495875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.495923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.496392 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2505818d810a7e94e8b9705a3938c35e4911506d30ae620ea3fc35179d375a35"} Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.496406 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.496406 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.497264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.497300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.497274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.497312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.497325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.497337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.639379 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.640283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.640314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.640322 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:17 crc kubenswrapper[4730]: I0131 16:30:17.640344 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 16:30:17 crc kubenswrapper[4730]: E0131 16:30:17.640710 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 31 16:30:18 crc kubenswrapper[4730]: W0131 16:30:18.059449 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 31 16:30:18 crc kubenswrapper[4730]: E0131 16:30:18.059552 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.392881 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:34:22.288813482 +0000 UTC Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.501665 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04" exitCode=0 Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.501723 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04"} Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.501854 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.501895 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.501904 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.501911 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.502891 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.502929 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503548 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.503754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:18 crc kubenswrapper[4730]: I0131 16:30:18.806584 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.393415 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:13:19.69771784 +0000 UTC Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.509785 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8654d12dcd8ad892ee6a5e4f0c0663c9b1040fc0120c47f7e85de62443934b01"} Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.509910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7a2e994cdaac0e7e168039fe280eb9849676bbb33e048590faeac4ea93cc9756"} Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.509934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"82c9501b3dd8b1374ffc2f3a6ac550539119be89530a0ab12d946bef8af73ad2"} Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.509956 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bae4671f2a044112a884a087a077a8bc8f351dafc63bb183ef8c52305b32b245"} Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.509850 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.510024 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.511355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.511391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.511404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:19 crc kubenswrapper[4730]: I0131 16:30:19.702255 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.389101 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.389443 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.391314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.391361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.391380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.394147 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:47:19.839115668 +0000 UTC Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.522644 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.522660 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.522735 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.522921 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33545d13e478eb3082cb6b534738ab7f69acf9167e21436ec47b6e48ccbeb4c4"} Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.523911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.524005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.524029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.524265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.524370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.524441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.758658 4730 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.841152 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.842690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.842760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.842784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.842893 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.971707 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.972007 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.973424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.973476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:20 crc kubenswrapper[4730]: I0131 16:30:20.973495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.367673 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.394595 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:36:11.271631475 +0000 UTC Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.525168 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.526386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.526471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.526504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.978448 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.978791 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.980567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.980689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:21 crc kubenswrapper[4730]: I0131 16:30:21.980711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:22 crc kubenswrapper[4730]: I0131 16:30:22.395242 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:32:43.152086332 +0000 UTC Jan 31 16:30:22 crc kubenswrapper[4730]: I0131 16:30:22.528595 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:22 crc kubenswrapper[4730]: I0131 16:30:22.530233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:22 crc kubenswrapper[4730]: I0131 16:30:22.530317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:22 crc kubenswrapper[4730]: I0131 16:30:22.530344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:22 crc kubenswrapper[4730]: I0131 16:30:22.731028 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.210675 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.210908 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.213527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.213583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.213605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.218445 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.389872 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.389979 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.395624 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:36:10.855747081 +0000 UTC Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 
16:30:23.531312 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.531523 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.532785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.532857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.532961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.532971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.533045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.533055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.641487 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.735171 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.735468 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.737008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.737075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:23 crc kubenswrapper[4730]: I0131 16:30:23.737100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:24 crc kubenswrapper[4730]: I0131 16:30:24.395842 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:29:33.306310121 +0000 UTC Jan 31 16:30:24 crc kubenswrapper[4730]: I0131 16:30:24.535752 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:24 crc kubenswrapper[4730]: I0131 16:30:24.537153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:24 crc kubenswrapper[4730]: I0131 16:30:24.537218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:24 crc kubenswrapper[4730]: I0131 16:30:24.537243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:24 crc kubenswrapper[4730]: E0131 16:30:24.538264 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 16:30:25 crc kubenswrapper[4730]: I0131 
16:30:25.396793 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:10:45.026836078 +0000 UTC Jan 31 16:30:26 crc kubenswrapper[4730]: I0131 16:30:26.398293 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:22:30.194795018 +0000 UTC Jan 31 16:30:27 crc kubenswrapper[4730]: I0131 16:30:27.399299 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:24:59.831434461 +0000 UTC Jan 31 16:30:28 crc kubenswrapper[4730]: W0131 16:30:28.178415 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.178504 4730 trace.go:236] Trace[1845246171]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 16:30:18.177) (total time: 10001ms): Jan 31 16:30:28 crc kubenswrapper[4730]: Trace[1845246171]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (16:30:28.178) Jan 31 16:30:28 crc kubenswrapper[4730]: Trace[1845246171]: [10.001019912s] [10.001019912s] END Jan 31 16:30:28 crc kubenswrapper[4730]: E0131 16:30:28.178524 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 16:30:28 crc kubenswrapper[4730]: W0131 16:30:28.240265 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.240366 4730 trace.go:236] Trace[1391944066]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 16:30:18.239) (total time: 10001ms): Jan 31 16:30:28 crc kubenswrapper[4730]: Trace[1391944066]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:30:28.240) Jan 31 16:30:28 crc kubenswrapper[4730]: Trace[1391944066]: [10.001200938s] [10.001200938s] END Jan 31 16:30:28 crc kubenswrapper[4730]: E0131 16:30:28.240390 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 16:30:28 crc kubenswrapper[4730]: E0131 16:30:28.264128 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188fddc4720d8231 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 16:30:14.376710705 +0000 UTC m=+1.182767691,LastTimestamp:2026-01-31 16:30:14.376710705 +0000 UTC m=+1.182767691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.381462 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.400153 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:35:25.470475883 +0000 UTC Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.621997 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.622119 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.632109 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.632197 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.814759 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]log ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]etcd ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 
16:30:28 crc kubenswrapper[4730]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-filter ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-informers ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-apiextensions-controllers ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/crd-informer-synced ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-system-namespaces-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 31 16:30:28 crc kubenswrapper[4730]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 31 16:30:28 crc kubenswrapper[4730]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/bootstrap-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/start-kube-aggregator-informers ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/apiservice-registration-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/apiservice-discovery-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]autoregister-completion ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapi-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 31 16:30:28 crc kubenswrapper[4730]: livez check failed Jan 31 16:30:28 crc kubenswrapper[4730]: I0131 16:30:28.814871 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:30:29 crc kubenswrapper[4730]: I0131 16:30:29.401135 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:01:01.898888507 +0000 UTC Jan 31 16:30:30 crc kubenswrapper[4730]: I0131 16:30:30.401875 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:05:51.428466318 
+0000 UTC Jan 31 16:30:30 crc kubenswrapper[4730]: I0131 16:30:30.979483 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:30 crc kubenswrapper[4730]: I0131 16:30:30.979763 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:30 crc kubenswrapper[4730]: I0131 16:30:30.981395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:30 crc kubenswrapper[4730]: I0131 16:30:30.981528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:30 crc kubenswrapper[4730]: I0131 16:30:30.981646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.401407 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.401983 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.402013 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:44:21.646523389 +0000 UTC Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.406272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.407015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.407122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.424360 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.470790 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.555336 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.556624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.556678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:31 crc kubenswrapper[4730]: I0131 16:30:31.556696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:32 crc kubenswrapper[4730]: I0131 16:30:32.403078 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:35:43.503882312 +0000 UTC Jan 31 16:30:32 crc kubenswrapper[4730]: I0131 16:30:32.786088 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.390075 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.390194 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.403828 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:28:15.21340257 +0000 UTC Jan 31 16:30:33 crc kubenswrapper[4730]: E0131 16:30:33.621478 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.623929 4730 trace.go:236] Trace[297418786]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 16:30:18.963) (total time: 14659ms): Jan 31 16:30:33 crc kubenswrapper[4730]: Trace[297418786]: ---"Objects listed" error: 14659ms (16:30:33.623) Jan 31 16:30:33 crc kubenswrapper[4730]: Trace[297418786]: [14.659983868s] [14.659983868s] END Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.623966 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.625134 4730 trace.go:236] Trace[1675242485]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 16:30:22.192) (total time: 11432ms): Jan 31 16:30:33 crc kubenswrapper[4730]: Trace[1675242485]: ---"Objects listed" error: 11432ms (16:30:33.624) Jan 31 16:30:33 crc kubenswrapper[4730]: Trace[1675242485]: [11.432693258s] [11.432693258s] END Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.625184 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 16:30:33 crc kubenswrapper[4730]: E0131 16:30:33.626470 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.627508 4730 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.652495 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.690083 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60514->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.690199 4730 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60514->192.168.126.11:17697: read: connection reset by peer" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.736194 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.736762 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.814283 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.815387 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.815448 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 16:30:33 crc kubenswrapper[4730]: I0131 16:30:33.821214 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.374956 4730 apiserver.go:52] "Watching apiserver" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.378617 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.379146 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.379553 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.379560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.379997 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.380082 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.379966 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.379935 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.380431 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.379908 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.380603 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.382912 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.383235 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.383773 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.384290 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.384860 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.385230 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.385338 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.388243 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.392956 4730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.393648 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.404402 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:36:53.025015657 +0000 UTC Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.413714 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.427973 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433008 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433051 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433070 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433091 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433109 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433142 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433162 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433182 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433200 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433218 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433238 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433260 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433280 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433305 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433329 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433352 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433378 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433401 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433423 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433445 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433466 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433489 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433509 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433531 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433553 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433575 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433595 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433617 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433639 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433664 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433738 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433761 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433785 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433830 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433856 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433882 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433904 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433931 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433955 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.433979 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434002 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434056 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434076 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434096 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434118 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434142 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434167 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434221 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434243 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434265 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434287 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434310 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434333 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434358 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434381 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434407 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434429 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434451 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434472 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434494 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434519 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434553 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434574 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434593 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434583 4730 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434613 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434660 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434640 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434683 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434720 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434746 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434771 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434814 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434843 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434866 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434888 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434910 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434937 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434965 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434988 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435008 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435055 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435077 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435106 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435129 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435151 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435175 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435196 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435219 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435243 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435302 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435329 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435354 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435383 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435410 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435436 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435474 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435499 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435523 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435546 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435570 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435593 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435616 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435753 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435784 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435836 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435861 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435883 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435933 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435957 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435983 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436008 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436059 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436085 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 
16:30:34.436111 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436135 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436161 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436186 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436211 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436237 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436262 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436285 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436308 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436334 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436357 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436380 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436407 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436431 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436457 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436485 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436510 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436534 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436559 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436583 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 
16:30:34.436611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436633 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436655 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436692 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436721 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436748 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436774 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436815 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436940 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436970 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436995 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437022 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437084 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437111 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437139 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437169 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437194 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437220 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437247 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437274 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437303 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437330 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437355 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437382 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437406 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437433 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437459 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437485 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437513 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437538 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437562 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437586 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437613 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437642 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437669 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437699 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437725 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437751 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437779 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439122 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439158 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439188 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439220 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439250 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439280 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439336 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439364 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439389 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439430 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439464 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439494 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439521 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439548 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439574 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439600 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439682 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439714 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439775 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439832 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439862 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439889 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439919 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439949 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439981 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440015 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440109 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440146 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440174 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440227 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434674 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434841 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.434905 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435059 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435212 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435301 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.441744 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435348 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435369 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435549 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435553 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435630 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435700 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435865 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.435979 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436015 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436128 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436163 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436291 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436307 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436689 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436847 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436976 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437107 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437129 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437316 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437432 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437568 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.436778 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437620 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.437866 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.438041 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.438170 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.438374 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.438610 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.438765 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.438981 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.438966 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439029 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439178 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439388 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439672 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439862 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439869 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.439967 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440341 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.440480 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.441010 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.441199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.441488 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.442295 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.442579 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.442861 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.443481 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.443831 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.444214 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.444564 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445075 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445225 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445236 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445372 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445511 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445674 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445647 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.445914 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446009 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446182 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446338 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446483 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446597 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446646 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446738 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.446977 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.447603 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.447706 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.447903 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.448131 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.448596 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.449231 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.449359 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.449976 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.450196 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.450595 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.450930 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.451136 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.451354 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.451430 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.451635 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.451848 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.451994 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.452554 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.452988 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.453399 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.453486 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.453903 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.454352 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.460970 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.467957 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.468269 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.469074 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.469333 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.469750 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.470066 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.470349 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.470731 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.471182 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.471694 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.471968 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.473300 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.473628 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.473677 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.473883 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.474085 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.474226 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.474277 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.474641 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.474884 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.476146 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.476173 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.476514 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.476669 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.477008 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.477753 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.478187 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.478233 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.487926 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.489921 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.490095 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.490450 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.490525 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.490782 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.491060 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.491605 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.492390 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.492695 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.492922 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.493011 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.493255 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.493349 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.493484 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.493765 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.495039 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.500463 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.500789 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.501323 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.501680 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.501979 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.502226 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.502491 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.502672 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.502720 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.503108 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.503131 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.503418 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.503494 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.503732 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.503895 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.503994 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.504023 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.504177 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.507027 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.507090 4730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.508574 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.509173 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.509671 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.510159 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). 
InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.510242 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.510728 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.511160 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.511296 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.511518 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.511899 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.512189 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.512539 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.512547 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.513967 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.514160 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:30:35.014091946 +0000 UTC m=+21.820148862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.514404 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.514915 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.515675 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.515828 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.515863 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.515973 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.516243 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:35.016215048 +0000 UTC m=+21.822271964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.515899 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.516870 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:35.016858777 +0000 UTC m=+21.822915703 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.516646 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.519450 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.520220 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.520490 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.520517 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.520533 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.520604 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:35.020583085 +0000 UTC m=+21.826640001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.521138 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.521454 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.530193 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.541332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.541429 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.541722 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542153 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542321 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542325 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542386 4730 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542402 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542418 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542434 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542451 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542466 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542480 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542496 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542511 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542525 4730 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542540 4730 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542556 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542571 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542687 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542715 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542729 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542749 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542773 4730 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542785 4730 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542810 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542832 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542842 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542852 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542862 4730 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542871 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542884 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542899 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542912 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542923 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542934 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542947 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542960 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542971 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542984 4730 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.542995 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543001 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc 
kubenswrapper[4730]: I0131 16:30:34.543006 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543070 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543087 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543106 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543121 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543137 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543151 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543163 4730 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543177 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543193 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543207 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543221 4730 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543234 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543248 4730 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543263 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543281 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543300 4730 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543315 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543328 4730 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543342 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543355 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543373 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543387 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543402 4730 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543416 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543429 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 
16:30:34.543443 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543456 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543468 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543482 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543494 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543507 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543521 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543536 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543549 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543562 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543577 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543589 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543601 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543624 4730 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543636 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543649 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543664 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543676 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543690 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543708 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543725 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543742 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543757 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543770 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543783 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543816 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543831 4730 
reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543844 4730 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543859 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543877 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543892 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543905 4730 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543919 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543932 4730 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543944 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543958 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543972 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543985 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543997 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544012 4730 
reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544028 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544042 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544053 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544065 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544078 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544090 4730 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544102 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544116 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544130 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544143 4730 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544155 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544169 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544181 4730 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544196 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544213 4730 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544226 4730 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544239 4730 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544252 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544265 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544282 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544300 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544318 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544331 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544344 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544358 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544371 
4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544384 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544397 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544410 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544424 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544438 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544451 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544464 4730 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544477 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544490 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544502 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544515 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544538 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc 
kubenswrapper[4730]: I0131 16:30:34.544551 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544565 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544577 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544616 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544629 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544641 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544654 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544666 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544678 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544691 4730 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544705 4730 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544717 4730 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544729 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544742 4730 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544755 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544768 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544781 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544793 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544824 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544836 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544848 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544861 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544873 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544887 4730 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544901 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544914 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544925 4730 reconciler_common.go:293] "Volume detached 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544938 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544952 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544965 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544980 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544995 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.545007 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.545019 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.545032 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.545044 4730 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.545057 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543516 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543615 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.545078 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.545117 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.545137 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.545227 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:35.0452025 +0000 UTC m=+21.851259646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543604 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543757 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.543761 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.544477 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.546445 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.547177 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.549463 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.550186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.550713 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.550819 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.551028 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.551906 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.552306 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.552507 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.552736 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.556658 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.557503 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.558967 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.563084 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.564427 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325" exitCode=255 Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.566991 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: E0131 16:30:34.573000 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.573252 4730 scope.go:117] "RemoveContainer" containerID="9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.580580 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.588603 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.599766 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.614394 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.624165 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.625555 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.629726 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.631077 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.632981 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.634221 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.635053 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.644533 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645780 4730 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645834 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645849 4730 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645877 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645892 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645906 4730 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645921 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645934 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645948 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645960 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645973 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645985 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.645997 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.646010 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.659121 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31
T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.671351 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.686023 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.697455 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.697840 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.705022 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 16:30:34 crc kubenswrapper[4730]: W0131 16:30:34.709543 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-acb74fd22ea58f23233442e137ba3330ca41d413ee702fae0d07f1dfc2b03feb WatchSource:0}: Error finding container acb74fd22ea58f23233442e137ba3330ca41d413ee702fae0d07f1dfc2b03feb: Status 404 returned error can't find the container with id acb74fd22ea58f23233442e137ba3330ca41d413ee702fae0d07f1dfc2b03feb Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.713049 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.714629 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.719987 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.720621 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.721600 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.722174 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.731250 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.732313 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.744607 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.755948 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.762422 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.767352 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31
T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.768064 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.768399 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.769194 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.771290 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.794878 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.796957 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.797695 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.799603 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325"} Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.800919 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.848242 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:34 crc kubenswrapper[4730]: I0131 16:30:34.848608 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.049274 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049378 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:30:36.049359014 +0000 UTC m=+22.855415930 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.049444 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.049468 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049562 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.049608 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049680 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049703 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:36.049695194 +0000 UTC m=+22.855752110 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049718 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:36.049712654 +0000 UTC m=+22.855769570 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049727 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049753 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049765 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.049732 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049837 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:36.049820327 +0000 UTC m=+22.855877243 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049836 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049854 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049864 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.049902 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:36.049895389 +0000 UTC m=+22.855952305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.102733 4730 csr.go:261] certificate signing request csr-kdp7z is approved, waiting to be issued Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.117741 4730 csr.go:257] certificate signing request csr-kdp7z is issued Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.405055 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:50:48.570305826 +0000 UTC Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.463628 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:35 crc kubenswrapper[4730]: E0131 16:30:35.463754 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.568891 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e72594efaa9de743a928f6da4f2b70cc6352040fc6b6186001af6cc87267d879"} Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.570352 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e"} Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.570376 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"acb74fd22ea58f23233442e137ba3330ca41d413ee702fae0d07f1dfc2b03feb"} Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.572551 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.574165 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383"} Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.574438 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.575973 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d"} Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.576013 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830"} Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.576029 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ba9bbfecca546d58d36bb037df94e3b64edb5edb177877730b5ca33f65bc4afb"} Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.597659 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.632121 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.652640 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.687507 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.701321 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.716852 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31
T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.732582 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.749966 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.763112 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.774506 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.783390 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.792610 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.801841 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.813598 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.936415 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bndmc"] Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.936951 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.939302 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.939510 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.939534 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.939648 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.940076 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.940185 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5f4md"] Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.940381 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-c8lpn"] Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.940539 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.940571 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-c8lpn" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.942641 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.953657 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.953682 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.953741 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.955317 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.957871 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mzg47"] Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.958412 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.962980 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.963002 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.963423 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.963533 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.965562 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 16:30:35 crc kubenswrapper[4730]: I0131 16:30:35.970880 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:35Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.000154 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:35Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.016858 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.038370 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058648 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058755 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-rootfs\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058775 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-system-cni-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058790 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-etc-kubernetes\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.058850 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:30:38.058814045 +0000 UTC m=+24.864870961 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058892 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-cni-multus\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-netns\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058946 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-cnibin\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-os-release\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " 
pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.058997 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-cni-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059013 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d1c5cbc-307d-4556-b162-2c5c0103662d-cni-binary-copy\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059030 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-socket-dir-parent\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059045 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-multus-certs\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059061 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059082 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ld7\" (UniqueName: \"kubernetes.io/projected/77b7e075-5b61-4efb-9138-4a40f1588cd4-kube-api-access-79ld7\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059096 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czwd\" (UniqueName: \"kubernetes.io/projected/2d1c5cbc-307d-4556-b162-2c5c0103662d-kube-api-access-6czwd\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059113 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-conf-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059154 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f3579c4f-c5ac-4bbb-b907-d472dcf735fe-hosts-file\") pod \"node-resolver-5f4md\" (UID: \"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\") " pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059174 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/77b7e075-5b61-4efb-9138-4a40f1588cd4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059195 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxsr\" (UniqueName: \"kubernetes.io/projected/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-kube-api-access-jgxsr\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059212 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-hostroot\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059227 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-kubelet\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059246 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpc2t\" (UniqueName: \"kubernetes.io/projected/f3579c4f-c5ac-4bbb-b907-d472dcf735fe-kube-api-access-tpc2t\") pod \"node-resolver-5f4md\" (UID: \"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\") " pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059287 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-proxy-tls\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: 
I0131 16:30:36.059301 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-cnibin\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059315 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-cni-bin\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059329 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-daemon-config\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059345 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059363 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059378 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-os-release\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059395 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/77b7e075-5b61-4efb-9138-4a40f1588cd4-cni-binary-copy\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059410 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-k8s-cni-cncf-io\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059427 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-system-cni-dir\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " 
pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.059447 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059554 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059567 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059578 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059609 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:38.059602358 +0000 UTC m=+24.865659274 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059731 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059766 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:38.059745412 +0000 UTC m=+24.865802328 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059850 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059860 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059867 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059886 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:38.059880576 +0000 UTC m=+24.865937492 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059911 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.059930 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:38.059925537 +0000 UTC m=+24.865982453 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.064239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.108670 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.119355 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 16:25:35 +0000 UTC, rotation deadline is 2026-11-11 15:03:15.575901882 +0000 UTC Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.119394 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6814h32m39.456510377s for next certificate rotation Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.141046 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.159974 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-rootfs\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 
16:30:36.160018 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-system-cni-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160036 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-etc-kubernetes\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160056 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-netns\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160073 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-cni-multus\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160091 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-cnibin\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160106 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-os-release\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160124 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79ld7\" (UniqueName: \"kubernetes.io/projected/77b7e075-5b61-4efb-9138-4a40f1588cd4-kube-api-access-79ld7\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160127 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-rootfs\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160155 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160208 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-etc-kubernetes\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160245 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-cni-multus\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160305 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-cni-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160326 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-netns\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160338 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d1c5cbc-307d-4556-b162-2c5c0103662d-cni-binary-copy\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160205 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-cnibin\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160359 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-socket-dir-parent\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160378 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-multus-certs\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160393 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6czwd\" (UniqueName: 
\"kubernetes.io/projected/2d1c5cbc-307d-4556-b162-2c5c0103662d-kube-api-access-6czwd\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160411 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-conf-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160432 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/77b7e075-5b61-4efb-9138-4a40f1588cd4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160431 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-system-cni-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160457 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f3579c4f-c5ac-4bbb-b907-d472dcf735fe-hosts-file\") pod \"node-resolver-5f4md\" (UID: \"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\") " pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160476 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxsr\" (UniqueName: \"kubernetes.io/projected/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-kube-api-access-jgxsr\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160495 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-kubelet\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160512 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-hostroot\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160562 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-os-release\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160567 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-proxy-tls\") pod \"machine-config-daemon-mzg47\" (UID: 
\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-socket-dir-parent\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-cni-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160623 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpc2t\" (UniqueName: \"kubernetes.io/projected/f3579c4f-c5ac-4bbb-b907-d472dcf735fe-kube-api-access-tpc2t\") pod \"node-resolver-5f4md\" (UID: \"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\") " pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160658 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-cnibin\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160673 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-cni-bin\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160689 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-daemon-config\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160741 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-os-release\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160758 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/77b7e075-5b61-4efb-9138-4a40f1588cd4-cni-binary-copy\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160783 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-k8s-cni-cncf-io\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160832 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-system-cni-dir\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160888 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-multus-certs\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160897 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-system-cni-dir\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160896 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.160968 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-conf-dir\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161001 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-kubelet\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161037 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f3579c4f-c5ac-4bbb-b907-d472dcf735fe-hosts-file\") pod \"node-resolver-5f4md\" (UID: \"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\") " pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161041 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-hostroot\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161191 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d1c5cbc-307d-4556-b162-2c5c0103662d-cni-binary-copy\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161234 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-run-k8s-cni-cncf-io\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161240 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/77b7e075-5b61-4efb-9138-4a40f1588cd4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161258 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-os-release\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161275 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-cnibin\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161290 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d1c5cbc-307d-4556-b162-2c5c0103662d-host-var-lib-cni-bin\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161353 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/77b7e075-5b61-4efb-9138-4a40f1588cd4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161488 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d1c5cbc-307d-4556-b162-2c5c0103662d-multus-daemon-config\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.161585 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/77b7e075-5b61-4efb-9138-4a40f1588cd4-cni-binary-copy\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.165016 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-proxy-tls\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.172112 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.189083 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6czwd\" (UniqueName: \"kubernetes.io/projected/2d1c5cbc-307d-4556-b162-2c5c0103662d-kube-api-access-6czwd\") pod \"multus-c8lpn\" (UID: \"2d1c5cbc-307d-4556-b162-2c5c0103662d\") " pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.202976 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79ld7\" (UniqueName: \"kubernetes.io/projected/77b7e075-5b61-4efb-9138-4a40f1588cd4-kube-api-access-79ld7\") pod \"multus-additional-cni-plugins-bndmc\" (UID: \"77b7e075-5b61-4efb-9138-4a40f1588cd4\") " pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.212091 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxsr\" (UniqueName: \"kubernetes.io/projected/47cbebb1-b682-4013-a2d5-7ca2f47f03e6-kube-api-access-jgxsr\") pod \"machine-config-daemon-mzg47\" (UID: \"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\") " pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.220345 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpc2t\" (UniqueName: 
\"kubernetes.io/projected/f3579c4f-c5ac-4bbb-b907-d472dcf735fe-kube-api-access-tpc2t\") pod \"node-resolver-5f4md\" (UID: \"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\") " pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.243325 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.250208 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bndmc" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.257266 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-c8lpn" Jan 31 16:30:36 crc kubenswrapper[4730]: W0131 16:30:36.264447 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b7e075_5b61_4efb_9138_4a40f1588cd4.slice/crio-b58862fb2ffd2e4f495100565d74d9d33cd407cb9638822854fb1c76b95d155e WatchSource:0}: Error finding container b58862fb2ffd2e4f495100565d74d9d33cd407cb9638822854fb1c76b95d155e: Status 404 returned error can't find the container with id b58862fb2ffd2e4f495100565d74d9d33cd407cb9638822854fb1c76b95d155e Jan 31 16:30:36 crc kubenswrapper[4730]: W0131 16:30:36.267451 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d1c5cbc_307d_4556_b162_2c5c0103662d.slice/crio-5f7055b837f5bdac907aca8b1019b9e32c4f6b1c2b394a3a065123b55df4acbd WatchSource:0}: Error finding container 5f7055b837f5bdac907aca8b1019b9e32c4f6b1c2b394a3a065123b55df4acbd: Status 404 returned error can't find the container with id 5f7055b837f5bdac907aca8b1019b9e32c4f6b1c2b394a3a065123b55df4acbd Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.269154 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5f4md" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.272032 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.275128 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.296230 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-a
piserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: W0131 16:30:36.307889 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3579c4f_c5ac_4bbb_b907_d472dcf735fe.slice/crio-584eda81d62ac4d287993461b8587293eb5b943b376f40033eb145ea66294a8c WatchSource:0}: Error finding container 584eda81d62ac4d287993461b8587293eb5b943b376f40033eb145ea66294a8c: Status 404 returned error can't find the container with id 584eda81d62ac4d287993461b8587293eb5b943b376f40033eb145ea66294a8c Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.322063 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.345377 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.376140 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.379157 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-25nsf"] Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.379875 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.384952 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.385245 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.385351 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.385476 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.386008 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.386159 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.388874 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.401395 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.406984 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:54:58.536688048 +0000 UTC Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.425900 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.452304 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463158 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-ovn\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463194 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-config\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463213 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-ovn-kubernetes\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc 
kubenswrapper[4730]: I0131 16:30:36.463231 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-script-lib\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463249 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-etc-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463265 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463282 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlj7c\" (UniqueName: \"kubernetes.io/projected/8e53a6e0-ca28-4088-8ced-22ba134f316e-kube-api-access-mlj7c\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463298 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-systemd-units\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463312 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-systemd\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463326 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovn-node-metrics-cert\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463341 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-netns\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463354 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463368 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-log-socket\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-netd\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463416 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-slash\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463429 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-var-lib-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463443 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-env-overrides\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463457 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-kubelet\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463469 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-bin\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463483 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-node-log\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463640 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.463727 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.463860 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:36 crc kubenswrapper[4730]: E0131 16:30:36.463911 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.468670 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.469398 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.470856 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.471494 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.472453 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.473039 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.473539 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.474713 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.478262 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 
16:30:36.478515 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.481310 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.481839 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.485477 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.485980 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.486857 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.487440 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.487902 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.492179 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.492638 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.495108 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 
16:30:36.495696 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.496992 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.497565 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.498914 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.499386 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.499957 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.500747 4730 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.500853 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.505104 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.506349 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.507234 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.524639 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.544115 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.566885 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-ovn\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.566920 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-config\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.566942 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-ovn-kubernetes\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.566960 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-script-lib\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.566977 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-25nsf\" (UID: 
\"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.566992 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-etc-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567010 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlj7c\" (UniqueName: \"kubernetes.io/projected/8e53a6e0-ca28-4088-8ced-22ba134f316e-kube-api-access-mlj7c\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567025 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-systemd-units\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567038 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-systemd\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567054 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovn-node-metrics-cert\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567071 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-netns\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567097 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-log-socket\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567113 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-netd\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567131 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-slash\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567145 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-var-lib-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567159 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-env-overrides\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567182 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-kubelet\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567196 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-bin\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-node-log\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567278 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-node-log\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567312 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-ovn\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567832 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567870 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-config\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567887 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-slash\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567902 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-ovn-kubernetes\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567909 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-var-lib-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.567934 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-kubelet\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568020 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-bin\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568279 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-netns\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568305 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-log-socket\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568366 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-netd\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568385 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568375 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-systemd\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568401 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-etc-openvswitch\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568419 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-systemd-units\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568542 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-script-lib\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.568624 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-env-overrides\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.572586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovn-node-metrics-cert\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.577576 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.586067 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlj7c\" (UniqueName: \"kubernetes.io/projected/8e53a6e0-ca28-4088-8ced-22ba134f316e-kube-api-access-mlj7c\") pod \"ovnkube-node-25nsf\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.588831 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerStarted","Data":"b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a"} Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.588870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerStarted","Data":"5f7055b837f5bdac907aca8b1019b9e32c4f6b1c2b394a3a065123b55df4acbd"} Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.590046 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5f4md" event={"ID":"f3579c4f-c5ac-4bbb-b907-d472dcf735fe","Type":"ContainerStarted","Data":"584eda81d62ac4d287993461b8587293eb5b943b376f40033eb145ea66294a8c"} Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.590958 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c"} Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.590997 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"c7e1c1ed76bfd14b4793489b457bb6bb25753b7683f3069f39b30c2ec583af41"} Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.592904 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerStarted","Data":"b58862fb2ffd2e4f495100565d74d9d33cd407cb9638822854fb1c76b95d155e"} Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.610504 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.627941 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.650549 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.665576 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.676036 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.687105 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.697571 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.701843 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.714083 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.734688 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.765406 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.788546 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.802785 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.833183 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.854226 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.873673 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.888050 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.903266 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.918132 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.941033 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.953957 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:36 crc kubenswrapper[4730]: I0131 16:30:36.984744 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:36Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.407986 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:05:45.236672529 +0000 UTC Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.463445 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:37 crc kubenswrapper[4730]: E0131 16:30:37.463560 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.596920 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81" exitCode=0 Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.596984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81"} Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.597009 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"ede295a0e698071c578b7e237e2fb7363ca4e7760498d6e8ea8b7e35a3b563c7"} Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.599858 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493"} Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.602292 4730 generic.go:334] "Generic (PLEG): container finished" podID="77b7e075-5b61-4efb-9138-4a40f1588cd4" containerID="9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4" exitCode=0 Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.602607 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerDied","Data":"9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4"} Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.604229 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491"} Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.606680 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-dns/node-resolver-5f4md" event={"ID":"f3579c4f-c5ac-4bbb-b907-d472dcf735fe","Type":"ContainerStarted","Data":"791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141"} Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.617363 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.633512 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.650973 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z 
is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.663493 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.677277 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.691342 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.708337 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.719875 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.731982 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.744937 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.759632 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.777494 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.793701 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.806673 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.817498 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.831222 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.843448 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.857721 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.867843 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.879281 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.897944 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.915755 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7p26r"] Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.916104 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.917633 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd2
6f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.917876 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.918013 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.919217 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.920063 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.932150 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.947867 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.964108 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.976231 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.980765 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9hq\" (UniqueName: \"kubernetes.io/projected/fbb1945b-e8d1-4041-bdf9-24573064e93a-kube-api-access-ld9hq\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.980833 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fbb1945b-e8d1-4041-bdf9-24573064e93a-serviceca\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.980854 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fbb1945b-e8d1-4041-bdf9-24573064e93a-host\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:37 crc kubenswrapper[4730]: I0131 16:30:37.987945 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:37Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.001967 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.014511 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.028415 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.045600 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.063228 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.077376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.081912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082066 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:30:42.082042322 +0000 UTC m=+28.888099238 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082110 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld9hq\" (UniqueName: \"kubernetes.io/projected/fbb1945b-e8d1-4041-bdf9-24573064e93a-kube-api-access-ld9hq\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082158 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fbb1945b-e8d1-4041-bdf9-24573064e93a-serviceca\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082182 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fbb1945b-e8d1-4041-bdf9-24573064e93a-host\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082350 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082387 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fbb1945b-e8d1-4041-bdf9-24573064e93a-host\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082400 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082466 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082490 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082503 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082515 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082522 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:42.082497275 +0000 UTC m=+28.888554231 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082529 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082543 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082548 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:42.082534126 +0000 UTC m=+28.888591042 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082574 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:42.082567017 +0000 UTC m=+28.888623933 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.082406 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.082901 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.083022 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:42.08300366 +0000 UTC m=+28.889060576 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.083581 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fbb1945b-e8d1-4041-bdf9-24573064e93a-serviceca\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.092229 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.102170 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld9hq\" (UniqueName: \"kubernetes.io/projected/fbb1945b-e8d1-4041-bdf9-24573064e93a-kube-api-access-ld9hq\") pod \"node-ca-7p26r\" (UID: \"fbb1945b-e8d1-4041-bdf9-24573064e93a\") " pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.105759 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.118795 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.148985 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z 
is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.361863 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7p26r" Jan 31 16:30:38 crc kubenswrapper[4730]: W0131 16:30:38.381571 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb1945b_e8d1_4041_bdf9_24573064e93a.slice/crio-eb4f2e0afb3073070f22f8ced361d50d5cd77d8b9bf8a273a074383b4b86517b WatchSource:0}: Error finding container eb4f2e0afb3073070f22f8ced361d50d5cd77d8b9bf8a273a074383b4b86517b: Status 404 returned error can't find the container with id eb4f2e0afb3073070f22f8ced361d50d5cd77d8b9bf8a273a074383b4b86517b Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.408565 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:22:18.459297 +0000 UTC Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.463377 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.463441 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.463965 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:38 crc kubenswrapper[4730]: E0131 16:30:38.464087 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.612248 4730 generic.go:334] "Generic (PLEG): container finished" podID="77b7e075-5b61-4efb-9138-4a40f1588cd4" containerID="d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486" exitCode=0 Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.612315 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerDied","Data":"d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.620724 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.620867 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.620936 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.621003 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.621061 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.621116 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.622142 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7p26r" event={"ID":"fbb1945b-e8d1-4041-bdf9-24573064e93a","Type":"ContainerStarted","Data":"eb4f2e0afb3073070f22f8ced361d50d5cd77d8b9bf8a273a074383b4b86517b"} Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.634234 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.645972 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.664027 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.680326 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.692707 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.734733 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.754836 4730 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.799918 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.817436 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.842893 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.853556 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.868430 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.881857 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.891889 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.902132 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.911550 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.927609 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.940076 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.952871 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.966764 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.983294 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:38 crc kubenswrapper[4730]: I0131 16:30:38.994603 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:38Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.007371 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.020495 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.031207 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.043891 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.408759 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:48:16.548040421 +0000 UTC Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.466426 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:39 crc kubenswrapper[4730]: E0131 16:30:39.466586 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.627554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7p26r" event={"ID":"fbb1945b-e8d1-4041-bdf9-24573064e93a","Type":"ContainerStarted","Data":"45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a"} Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.632019 4730 generic.go:334] "Generic (PLEG): container finished" podID="77b7e075-5b61-4efb-9138-4a40f1588cd4" containerID="6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65" exitCode=0 Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.632082 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerDied","Data":"6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65"} Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.657275 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.685220 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z 
is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.709715 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.729668 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.747873 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.765033 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.778613 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.792107 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.806747 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.818627 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.
126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.833142 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.843988 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:39 crc kubenswrapper[4730]: I0131 16:30:39.859920 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:39Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.026592 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.028247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.028363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.028428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.028590 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.034043 4730 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.034368 4730 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.035671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.035712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.035725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.035744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.035758 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.048679 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.052327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.052367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.052379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.052396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.052408 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.066506 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.070949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.070992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.071010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.071135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.071154 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.082148 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.085298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.085430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.085556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.085675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.085828 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.099007 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.102164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.102210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.102227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.102247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.102264 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.116714 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.116879 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.118681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.118709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.118717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.118730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.118740 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.221669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.221966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.222151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.222285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.222416 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.326203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.326242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.326251 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.326266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.326276 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.394465 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.402638 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.409045 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:07:41.555693012 +0000 UTC Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.413735 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.422000 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.428699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.428876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
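The repeated webhook failures above all reduce to a validity-window check on the serving certificate behind https://127.0.0.1:9743: the current time reported by the kubelet (2026-01-31T16:30:40Z) falls after the certificate's notAfter bound (2025-08-24T17:21:41Z), so every Post to the node/pod identity webhook is rejected before the patch is ever applied. Below is a minimal Python sketch of that check using the two timestamps taken from the log; the notBefore value and the helper name within_validity are illustrative assumptions, since the log only reports the notAfter bound.

from datetime import datetime, timezone

def within_validity(now, not_before, not_after):
    # A certificate is acceptable only while `now` lies inside [notBefore, notAfter].
    return not_before <= now <= not_after

# Timestamps copied from the webhook error above.
now = datetime(2026, 1, 31, 16, 30, 40, tzinfo=timezone.utc)
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
# notBefore is not shown in the log; this value is an assumption for illustration only.
not_before = datetime(2024, 8, 24, 17, 21, 41, tzinfo=timezone.utc)

if not within_validity(now, not_before, not_after):
    # Matches the failure mode in the log: current time is after notAfter.
    print(f"certificate invalid: notAfter exceeded by {now - not_after}")

Running this with the logged values reports the certificate as roughly 160 days past its notAfter bound, which is consistent with the retry sequence above ending in "update node status exceeds retry count": each status patch is blocked by the same expired webhook certificate rather than by the patch contents themselves.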
Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.428895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.428924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.428942 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.451090 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.463964 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.464278 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.464691 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.464845 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.483154 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.501721 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.515973 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.527347 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.530852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.530888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.530898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.530918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.530929 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.541340 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.549640 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.559340 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.567588 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.578399 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.588483 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.606532 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.616421 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.633329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.633362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.633375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.633394 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.633406 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.634539 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e7
26f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.635833 4730 generic.go:334] "Generic (PLEG): container finished" podID="77b7e075-5b61-4efb-9138-4a40f1588cd4" containerID="f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285" exitCode=0 Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.636612 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerDied","Data":"f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285"} Jan 31 16:30:40 crc kubenswrapper[4730]: E0131 16:30:40.645928 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.647860 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.713690 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.728120 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.734720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.734749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.734760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.734777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.734791 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.753943 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.767259 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.781941 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.796593 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.811573 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.823283 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.836602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.836635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.836645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.836662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.836674 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.838080 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.850386 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.864234 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.878464 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.889394 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.900950 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.912232 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.932041 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.939225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.939248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.939257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.939271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.939280 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:40Z","lastTransitionTime":"2026-01-31T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.943015 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:40 crc kubenswrapper[4730]: I0131 16:30:40.993745 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:40Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.026702 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.042048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.042085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.042096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.042112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.042124 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.065055 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.106810 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.145280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.145326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.145338 4730 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.145357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.145367 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.148376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.183870 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.227986 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.247975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.248006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.248015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.248028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.248038 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.274898 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e7
26f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.352408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.352446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.352456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.352471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.352480 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.410343 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:33:14.819602236 +0000 UTC Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.454346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.454371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.454379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.454392 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.454400 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.464044 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:41 crc kubenswrapper[4730]: E0131 16:30:41.464150 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.559916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.559975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.559993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.560010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.560023 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.641231 4730 generic.go:334] "Generic (PLEG): container finished" podID="77b7e075-5b61-4efb-9138-4a40f1588cd4" containerID="3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9" exitCode=0 Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.641314 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerDied","Data":"3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.655506 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.661258 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.662334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.662365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.662377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.662393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.662407 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.682109 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e7
26f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.697026 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.718991 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.734371 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.749777 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.760370 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.768059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.768102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.768112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.768127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.768139 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.771498 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.783405 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.841701 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1
b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.853705 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.863892 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.870440 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.870482 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.870494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.870513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.870524 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.876038 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.885821 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.973389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.973727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.973735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.973748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:41 crc kubenswrapper[4730]: I0131 16:30:41.973759 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:41Z","lastTransitionTime":"2026-01-31T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.075999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.076142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.076154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.076168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.076176 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.126023 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.126101 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.126127 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.126144 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.126163 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126217 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:30:50.126183829 +0000 UTC m=+36.932240745 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126262 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126263 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126291 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126287 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126305 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126375 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:50.126354404 +0000 UTC m=+36.932411320 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126405 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:50.126389625 +0000 UTC m=+36.932446541 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126277 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126419 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126438 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:50.126432036 +0000 UTC m=+36.932488952 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126260 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.126500 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:50.126487098 +0000 UTC m=+36.932544154 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.178857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.178889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.178900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.178931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.178941 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.280934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.280962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.280974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.280990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.281002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.387160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.387211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.387228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.387248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.387264 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.411445 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:29:06.55129301 +0000 UTC Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.464103 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.464162 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.464240 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:42 crc kubenswrapper[4730]: E0131 16:30:42.464396 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.489110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.489159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.489174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.489196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.489211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.592120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.592403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.592496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.592586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.592671 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.661922 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerDied","Data":"9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.661783 4730 generic.go:334] "Generic (PLEG): container finished" podID="77b7e075-5b61-4efb-9138-4a40f1588cd4" containerID="9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b" exitCode=0 Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.680371 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-conf
ig\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.695091 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.695390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.695441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.695455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.695474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.695489 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.707941 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.730380 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z 
is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.749397 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.764454 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.777736 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.789057 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.811047 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.812260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.812290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.812302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.812317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.812328 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.851556 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f27
4e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.867687 4730 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.886053 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.902246 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.915102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.915125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.915133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.915145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.915152 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:42Z","lastTransitionTime":"2026-01-31T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:42 crc kubenswrapper[4730]: I0131 16:30:42.919919 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.017032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.017361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.017422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.017490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.017552 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.120081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.120114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.120125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.120139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.120152 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.222242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.222267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.222277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.222293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.222304 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.324448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.324497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.324510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.324527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.324539 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.412268 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:28:35.667594157 +0000 UTC Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.426865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.426902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.426915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.426931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.426943 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.463486 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:43 crc kubenswrapper[4730]: E0131 16:30:43.463605 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.529299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.529336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.529349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.529365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.529377 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.631419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.631457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.631466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.631479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.631488 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.670267 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.671137 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.674438 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" event={"ID":"77b7e075-5b61-4efb-9138-4a40f1588cd4","Type":"ContainerStarted","Data":"a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.688439 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.695901 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.704653 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.717241 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.731225 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc 
kubenswrapper[4730]: I0131 16:30:43.734202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.734241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.734253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.734271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.734312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.741044 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.757592 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.771307 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.786414 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.807318 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.817885 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.835907 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.837359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.837431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.837448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.837475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.837495 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.848279 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.862082 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.882787 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2a
bb0fafa22cc0ba3050fd52e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.897746 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.916324 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2a
bb0fafa22cc0ba3050fd52e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.930376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.939891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.939921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.939928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.939942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.939951 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:43Z","lastTransitionTime":"2026-01-31T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.946429 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.958060 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.968204 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.977945 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:43 crc kubenswrapper[4730]: I0131 16:30:43.989866 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:43Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.001190 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.015035 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.025221 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.037680 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.041598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.041644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc 
kubenswrapper[4730]: I0131 16:30:44.041659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.041679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.041691 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.051146 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.060225 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.143993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.144025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.144039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.144087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.144098 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.227758 4730 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.246263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.246306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.246318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.246362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.246376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.348250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.348304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.348318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.348338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.348353 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.412821 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:48:46.915909982 +0000 UTC Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.450574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.450626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.450643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.450686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.450739 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.463055 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.463088 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:44 crc kubenswrapper[4730]: E0131 16:30:44.463151 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:44 crc kubenswrapper[4730]: E0131 16:30:44.463281 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.480136 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.505521 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2a
bb0fafa22cc0ba3050fd52e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.525319 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.537459 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.553053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.553125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.553142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.553164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.553211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.558587 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.576770 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.589696 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.604207 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.614597 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.626980 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.639494 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.654084 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.655007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.655053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.655068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.655088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.655103 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.665576 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.674321 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.677052 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.677549 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:44 crc 
kubenswrapper[4730]: I0131 16:30:44.698981 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.714074 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.724473 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.737391 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.757642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.757702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.757720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.757743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.757760 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.765980 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.777841 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.790994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.801680 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.814838 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.827131 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.841480 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.853022 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.859657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.859687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.859695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.859706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.859716 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.864733 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.876795 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.890886 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.961716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.961762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:44 crc 
kubenswrapper[4730]: I0131 16:30:44.961774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.961790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:44 crc kubenswrapper[4730]: I0131 16:30:44.961830 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:44Z","lastTransitionTime":"2026-01-31T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.064741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.064778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.064792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.064835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.064850 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.182441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.182477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.182485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.182498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.182509 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.284598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.284632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.284641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.284654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.284663 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.389222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.389247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.389255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.389268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.389278 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.413836 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:46:36.642431002 +0000 UTC Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.463396 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:45 crc kubenswrapper[4730]: E0131 16:30:45.463504 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.491079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.491109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.491117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.491127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.491153 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.594741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.594790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.594841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.594860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.594872 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.680071 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.697109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.697150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.697162 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.697179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.697191 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.721278 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.799224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.799281 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.799298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.799318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.799333 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.901533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.901564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.901571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.901583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:45 crc kubenswrapper[4730]: I0131 16:30:45.901592 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:45Z","lastTransitionTime":"2026-01-31T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.004732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.004819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.004832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.004852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.004864 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.107269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.107322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.107339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.107361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.107378 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.210082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.210132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.210148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.210169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.210188 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.312479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.312532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.312555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.312583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.312603 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.413935 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:18:24.060565358 +0000 UTC Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.415264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.415295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.415328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.415349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.415363 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.464068 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.464106 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:46 crc kubenswrapper[4730]: E0131 16:30:46.464258 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:46 crc kubenswrapper[4730]: E0131 16:30:46.464383 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.517943 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.517971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.517979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.517992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.518002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.620249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.620283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.620291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.620324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.620334 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.685109 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/0.log" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.688346 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0" exitCode=1 Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.688398 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.689353 4730 scope.go:117] "RemoveContainer" containerID="8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.715128 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.726030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.726270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.726393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.726524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.726647 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.738305 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.758845 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.779272 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.795067 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.815699 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.828900 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.829164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.829353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.829599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.829765 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.834108 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.849294 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.870551 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.886045 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.900364 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.913197 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.926448 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.931993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.932067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.932082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.932342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.932381 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:46Z","lastTransitionTime":"2026-01-31T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:46 crc kubenswrapper[4730]: I0131 16:30:46.950526 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:45Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724453 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:30:45.724496 5960 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724630 5960 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.725134 5960 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 16:30:45.725443 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:45.725479 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 16:30:45.725509 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:45.725526 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 16:30:45.725554 5960 factory.go:656] Stopping watch factory\\\\nI0131 16:30:45.725568 5960 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:45.725597 5960 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 
16:30:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:46Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.034778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.034848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.034859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.034873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.034883 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.138940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.139004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.139027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.139056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.139079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.241183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.241222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.241231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.241244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.241255 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.344459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.344499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.344510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.344525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.344537 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.414221 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:52:55.551292364 +0000 UTC Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.447154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.447191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.447199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.447213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.447223 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.463514 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:47 crc kubenswrapper[4730]: E0131 16:30:47.463628 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.549539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.549593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.549604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.549622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.549641 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.652374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.652453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.652471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.652494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.652547 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.693721 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/1.log" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.694474 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/0.log" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.697429 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb" exitCode=1 Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.697472 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.697529 4730 scope.go:117] "RemoveContainer" containerID="8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.698528 4730 scope.go:117] "RemoveContainer" containerID="668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb" Jan 31 16:30:47 crc kubenswrapper[4730]: E0131 16:30:47.698860 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.719433 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.738066 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.754889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.754940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.754957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.754994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.755010 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.762990 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.782042 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.805004 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.826765 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.848492 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.858566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.858629 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc 
kubenswrapper[4730]: I0131 16:30:47.858646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.858670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.858688 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.862285 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.876276 4730 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.891656 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.910324 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.925063 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.938130 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq"] Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.938589 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.940773 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.944217 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.960739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.960780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.960796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.960838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.960855 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:47Z","lastTransitionTime":"2026-01-31T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.964241 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d
04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:45Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724453 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:30:45.724496 5960 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724630 5960 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.725134 5960 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 16:30:45.725443 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:45.725479 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 16:30:45.725509 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:45.725526 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 16:30:45.725554 5960 factory.go:656] Stopping watch factory\\\\nI0131 16:30:45.725568 5960 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:45.725597 5960 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 16:30:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped 
ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:47 crc kubenswrapper[4730]: I0131 16:30:47.984846 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:47Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.002632 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.014693 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.014830 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fd5k\" (UniqueName: \"kubernetes.io/projected/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-kube-api-access-2fd5k\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.014873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.014904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.020018 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.039053 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.051890 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.063326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.063385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.063406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.063430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.063447 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.065547 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.084434 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.098072 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.113038 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.116328 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fd5k\" (UniqueName: \"kubernetes.io/projected/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-kube-api-access-2fd5k\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.116369 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.116391 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.116427 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.117291 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.117974 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.127235 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.129575 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.145948 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fd5k\" (UniqueName: \"kubernetes.io/projected/dbb56b3f-38e1-40f3-b28a-bfd1b3f50188-kube-api-access-2fd5k\") pod \"ovnkube-control-plane-749d76644c-6p6cq\" (UID: \"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.154068 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.165831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.165935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.165955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.166432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.166494 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.187441 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:45Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724453 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:30:45.724496 5960 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724630 5960 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.725134 5960 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 16:30:45.725443 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:45.725479 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 16:30:45.725509 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:45.725526 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 16:30:45.725554 5960 factory.go:656] Stopping watch factory\\\\nI0131 16:30:45.725568 5960 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:45.725597 5960 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 
16:30:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.203339 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.222468 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.247003 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.261542 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.269767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.269822 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.269833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.269849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.269861 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.269899 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: W0131 16:30:48.281900 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbb56b3f_38e1_40f3_b28a_bfd1b3f50188.slice/crio-e335e6f58e060011f53eb035338461c12e70f5b2c0c754d5bad90eac9100ce28 WatchSource:0}: Error finding container e335e6f58e060011f53eb035338461c12e70f5b2c0c754d5bad90eac9100ce28: Status 404 returned error can't find the container with id e335e6f58e060011f53eb035338461c12e70f5b2c0c754d5bad90eac9100ce28 Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.373427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.373459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.373468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.373481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.373492 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.415312 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:36:38.857917115 +0000 UTC Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.463996 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.464046 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:48 crc kubenswrapper[4730]: E0131 16:30:48.464097 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:48 crc kubenswrapper[4730]: E0131 16:30:48.464170 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.475877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.475901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.475909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.475922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.475930 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.577659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.577704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.577714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.577730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.577740 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.680138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.680176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.680184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.680199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.680240 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.706016 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" event={"ID":"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188","Type":"ContainerStarted","Data":"3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.706071 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" event={"ID":"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188","Type":"ContainerStarted","Data":"fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.706086 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" event={"ID":"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188","Type":"ContainerStarted","Data":"e335e6f58e060011f53eb035338461c12e70f5b2c0c754d5bad90eac9100ce28"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.707965 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/1.log" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.711222 4730 scope.go:117] "RemoveContainer" containerID="668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb" Jan 31 16:30:48 crc kubenswrapper[4730]: E0131 16:30:48.711378 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.718307 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.728150 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.737844 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.753513 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d
04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8386c2369584688c554ef0f95f57bf8fe40eac2abb0fafa22cc0ba3050fd52e0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:45Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724453 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:30:45.724496 5960 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.724630 5960 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:45.725134 5960 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 16:30:45.725443 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:45.725479 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 16:30:45.725509 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:45.725526 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 16:30:45.725554 5960 factory.go:656] Stopping watch factory\\\\nI0131 16:30:45.725568 5960 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:45.725597 5960 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 16:30:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped 
ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.767093 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 
16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.779847 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.782074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.782101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.782109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.782123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.782133 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.792119 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.804355 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.814797 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.827920 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.836633 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 
2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.847058 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.856962 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.866003 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.875795 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.884476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.884504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.884515 4730 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.884531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.884542 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.888997 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 
1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.899619 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.913307 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.924726 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.937483 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.946028 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.954711 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.964862 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.974861 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.990238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.990272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.990427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.990566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.990582 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:48Z","lastTransitionTime":"2026-01-31T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:48 crc kubenswrapper[4730]: I0131 16:30:48.991476 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:48Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.004385 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.013811 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.021505 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.035001 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.059363 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d
04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.093073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.093111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.093124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.093140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.093152 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.195079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.195113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.195123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.195137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.195146 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.297355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.297416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.297434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.297461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.297481 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.399983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.400053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.400077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.400106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.400127 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.416467 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:49:40.146497506 +0000 UTC Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.463298 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:49 crc kubenswrapper[4730]: E0131 16:30:49.463435 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.502913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.502973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.502991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.503019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.503037 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.605671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.605720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.605736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.605756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.605769 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.707828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.707869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.707879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.707898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.707910 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.810442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.810504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.810521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.810545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.810564 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.842087 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sg8lw"] Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.842499 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:49 crc kubenswrapper[4730]: E0131 16:30:49.842549 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.859276 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.873491 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.884913 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.899156 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.912569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.912617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.912634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.912651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.912665 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:49Z","lastTransitionTime":"2026-01-31T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.927195 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.936356 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.936428 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvw5f\" (UniqueName: \"kubernetes.io/projected/39ef74a4-f27d-498b-8bbd-aae64590d030-kube-api-access-fvw5f\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.941899 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.958384 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.977013 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:49 crc kubenswrapper[4730]: I0131 16:30:49.989218 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:49Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.007458 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.016205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.016264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.016286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.016315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.016337 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.022411 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.037423 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvw5f\" (UniqueName: \"kubernetes.io/projected/39ef74a4-f27d-498b-8bbd-aae64590d030-kube-api-access-fvw5f\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.037519 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.037643 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.037710 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs 
podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:50.537689099 +0000 UTC m=+37.343746025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.040344 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.056922 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.071030 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvw5f\" (UniqueName: \"kubernetes.io/projected/39ef74a4-f27d-498b-8bbd-aae64590d030-kube-api-access-fvw5f\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.078443 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.093317 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.108332 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.119309 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.119381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.119404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.119437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.119461 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.138967 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.139158 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139239 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:31:06.139199057 +0000 UTC m=+52.945256013 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139308 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139379 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:06.139355322 +0000 UTC m=+52.945412278 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.139305 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.139437 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.139480 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139501 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139538 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139561 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139609 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139633 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139644 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:06.139618639 +0000 UTC m=+52.945675675 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139652 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139723 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:06.139704972 +0000 UTC m=+52.945761928 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139862 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.139936 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:06.139913348 +0000 UTC m=+52.945970364 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.227055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.227098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.227110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.227126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.227137 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.301470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.301534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.301557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.301588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.301611 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.322359 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.328161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.328198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.328208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.328225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.328239 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.341459 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.345120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.345148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.345161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.345176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.345187 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.363981 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.368026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.368084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.368110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.368139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.368162 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.386002 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.390169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.390253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.390272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.390295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.390313 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.411838 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:50Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.412072 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.414043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.414099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.414117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.414139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.414157 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.417236 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:17:04.327380377 +0000 UTC Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.463924 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.464094 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.464230 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.464409 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.516538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.516574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.516587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.516605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.516616 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.544275 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.544415 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: E0131 16:30:50.544478 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:51.544457148 +0000 UTC m=+38.350514074 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.620754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.620878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.620906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.620995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.621025 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.724571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.724633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.724655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.724682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.724705 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.828274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.828322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.828340 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.828363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.828385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.931977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.932078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.932103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.932141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:50 crc kubenswrapper[4730]: I0131 16:30:50.932167 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:50Z","lastTransitionTime":"2026-01-31T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.035477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.035537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.035555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.035580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.035598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.138045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.138121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.138146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.138177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.138201 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.241923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.241979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.242000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.242027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.242053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.345995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.346049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.346060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.346081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.346093 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.418379 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:13:09.444160563 +0000 UTC Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.449036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.449097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.449114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.449139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.449158 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.463747 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.463790 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:51 crc kubenswrapper[4730]: E0131 16:30:51.463889 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:51 crc kubenswrapper[4730]: E0131 16:30:51.463963 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.552570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.552663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.552683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.552739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.552758 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.556078 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:51 crc kubenswrapper[4730]: E0131 16:30:51.556281 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:51 crc kubenswrapper[4730]: E0131 16:30:51.556379 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:53.556352669 +0000 UTC m=+40.362409625 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.656703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.656770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.656791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.656848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.656870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.760120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.760195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.760219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.760252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.760276 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.863421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.863523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.863543 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.863567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.863623 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.966954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.967052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.967073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.967104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:51 crc kubenswrapper[4730]: I0131 16:30:51.967129 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:51Z","lastTransitionTime":"2026-01-31T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.070146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.070210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.070229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.070253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.070271 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.179430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.179492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.179504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.179520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.179531 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.282894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.282957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.282975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.282999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.283017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.386266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.386326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.386346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.386373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.386390 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.418790 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:08:59.170959632 +0000 UTC Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.464140 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.464467 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:52 crc kubenswrapper[4730]: E0131 16:30:52.464624 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:52 crc kubenswrapper[4730]: E0131 16:30:52.464904 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.488657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.488701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.488718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.488739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.488756 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.592095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.592137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.592148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.592164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.592176 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.695236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.695288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.695305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.695327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.695343 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.798102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.798149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.798160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.798177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.798191 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.901080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.901140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.901158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.901183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:52 crc kubenswrapper[4730]: I0131 16:30:52.901202 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:52Z","lastTransitionTime":"2026-01-31T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.004363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.004420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.004441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.004465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.004485 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.107506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.107898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.108099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.108342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.108533 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.211982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.212404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.212547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.212690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.212916 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.316854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.316919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.316938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.316962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.316979 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.418968 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:10:42.547053647 +0000 UTC Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.420501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.420566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.420585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.420610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.420631 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.464073 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.464135 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:53 crc kubenswrapper[4730]: E0131 16:30:53.464325 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:30:53 crc kubenswrapper[4730]: E0131 16:30:53.464461 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.523362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.523395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.523406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.523421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.523433 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.578510 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:53 crc kubenswrapper[4730]: E0131 16:30:53.578747 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:53 crc kubenswrapper[4730]: E0131 16:30:53.578862 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:30:57.578838035 +0000 UTC m=+44.384894961 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.626912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.627003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.627031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.627064 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.627091 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.729840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.730092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.730188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.730297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.730421 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.740866 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.754524 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.771295 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.786014 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.799579 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.815979 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.833990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.834079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.834097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.834120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.834137 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.834725 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.851773 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.865841 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.882429 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.895773 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.914141 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.932602 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.937007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.937074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.937088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.937127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.937142 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:53Z","lastTransitionTime":"2026-01-31T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.946359 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.959075 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.978783 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:53 crc kubenswrapper[4730]: I0131 16:30:53.998251 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d
04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:53Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.039583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.039854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.039997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.040134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.040266 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.144023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.144438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.144620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.144762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.144927 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.248346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.248605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.248771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.248952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.249085 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.352490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.352755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.352938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.353112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.353242 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.419487 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 15:47:12.46887122 +0000 UTC Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.455467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.455725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.455903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.456077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.456258 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.463932 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.463973 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:54 crc kubenswrapper[4730]: E0131 16:30:54.464154 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:54 crc kubenswrapper[4730]: E0131 16:30:54.464204 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.478283 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.499315 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.513858 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.544892 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.558989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.559059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.559071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.559120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.559134 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.571006 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.591076 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.608584 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 
16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.632640 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.653845 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.662218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.662490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.662640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.662842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.662986 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.672668 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.690135 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.704528 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.725477 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.737665 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.752596 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.765884 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.765940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.765956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.765981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.766001 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.773004 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:30:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.869292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.869345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.869363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.869386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.869405 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.972131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.972168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.972176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.972190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:54 crc kubenswrapper[4730]: I0131 16:30:54.972200 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:54Z","lastTransitionTime":"2026-01-31T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.074314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.074348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.074356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.074368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.074376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.181898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.181955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.181971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.181993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.182011 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.284650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.284720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.284743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.284777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.284843 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.387311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.387368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.387385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.387407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.387424 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.421304 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:02:28.293368537 +0000 UTC Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.464184 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.464214 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:55 crc kubenswrapper[4730]: E0131 16:30:55.464369 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:55 crc kubenswrapper[4730]: E0131 16:30:55.464494 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.490087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.490150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.490169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.490193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.490211 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.593862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.593932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.593955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.593986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.594008 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.697258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.697316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.697335 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.697358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.697376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.799792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.799903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.799926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.799957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.799983 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.903762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.903847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.903864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.903888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:55 crc kubenswrapper[4730]: I0131 16:30:55.903905 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:55Z","lastTransitionTime":"2026-01-31T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.007473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.007510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.007519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.007553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.007564 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.110197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.110260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.110278 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.110303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.110322 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.212834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.212908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.212963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.213020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.213107 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.316062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.316106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.316115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.316130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.316141 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.418459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.418511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.418522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.418539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.418550 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.422580 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:45:08.459236577 +0000 UTC Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.463269 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.463355 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:56 crc kubenswrapper[4730]: E0131 16:30:56.463514 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:56 crc kubenswrapper[4730]: E0131 16:30:56.463596 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.520477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.520510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.520519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.520531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.520541 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.622875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.622937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.623003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.623031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.623094 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.727878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.727923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.727940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.727962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.727979 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.830178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.830248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.830266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.830290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.830308 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.932955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.933006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.933017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.933037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:56 crc kubenswrapper[4730]: I0131 16:30:56.933051 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:56Z","lastTransitionTime":"2026-01-31T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.035530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.035605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.035630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.035660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.035679 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.138587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.138671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.138701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.138733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.138758 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.241351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.241410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.241427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.241448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.241465 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.344072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.344149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.344167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.344192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.344209 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.423081 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:36:06.057939665 +0000 UTC Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.447012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.447089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.447113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.447146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.447171 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.463486 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.463527 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:57 crc kubenswrapper[4730]: E0131 16:30:57.463846 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:57 crc kubenswrapper[4730]: E0131 16:30:57.463987 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.549595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.549650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.549668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.549692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.549709 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.627099 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:57 crc kubenswrapper[4730]: E0131 16:30:57.627327 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:57 crc kubenswrapper[4730]: E0131 16:30:57.627421 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:05.62739643 +0000 UTC m=+52.433453366 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.652753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.652785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.652795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.652867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.652886 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.756094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.756159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.756181 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.756208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.756230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.859061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.859101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.859109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.859124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.859133 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.961032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.961078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.961091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.961106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:57 crc kubenswrapper[4730]: I0131 16:30:57.961116 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:57Z","lastTransitionTime":"2026-01-31T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.063845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.063874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.063882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.063894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.063902 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.167448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.167507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.167526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.167548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.167570 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.270648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.270695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.270713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.270735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.270749 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.373593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.373629 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.373637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.373650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.373659 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.423714 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:12:42.218347092 +0000 UTC Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.463873 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.463937 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:30:58 crc kubenswrapper[4730]: E0131 16:30:58.464024 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:30:58 crc kubenswrapper[4730]: E0131 16:30:58.464146 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.480258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.480289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.480298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.480310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.480322 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.582494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.582520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.582530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.582544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.582554 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.684967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.684999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.685010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.685024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.685034 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.787908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.787941 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.787952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.787968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.787979 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.891239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.891296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.891314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.891336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.891351 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.994199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.994247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.994268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.994297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:58 crc kubenswrapper[4730]: I0131 16:30:58.994317 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:58Z","lastTransitionTime":"2026-01-31T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.097516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.097578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.097602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.097628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.097647 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.200629 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.200724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.200743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.200766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.200782 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.303569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.303620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.303644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.303671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.303694 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.406848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.406886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.406895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.406909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.406919 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.424675 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:52:34.35713801 +0000 UTC Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.463188 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.463231 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:30:59 crc kubenswrapper[4730]: E0131 16:30:59.463306 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:30:59 crc kubenswrapper[4730]: E0131 16:30:59.463418 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.510383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.510438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.510454 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.510475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.510494 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.613251 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.613304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.613322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.613345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.613361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.716415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.716485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.716507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.716536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.716558 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.819048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.819113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.819134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.819161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.819178 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.921861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.922001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.922019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.922040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:30:59 crc kubenswrapper[4730]: I0131 16:30:59.922053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:30:59Z","lastTransitionTime":"2026-01-31T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.024972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.025004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.025013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.025024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.025034 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.127716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.127786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.127841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.127871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.127896 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.230354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.230407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.230423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.230444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.230460 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.332682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.332780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.332799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.332863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.332883 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.424836 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:42:21.154743222 +0000 UTC Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.435180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.435209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.435217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.435228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.435252 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.464266 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.464438 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.464905 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.465018 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.537891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.537938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.537951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.537972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.537987 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.640337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.640376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.640385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.640398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.640409 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.653798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.653931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.653951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.653977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.653995 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.674766 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:00Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.678919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.678955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.678968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.678987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.679000 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.698678 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:00Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.703514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.703572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.703594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.703615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.703630 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.730847 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:00Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.734840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.734887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.734900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.734927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.734942 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.752540 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:00Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.756945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.757007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.757025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.757049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.757070 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.773957 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:00Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:00 crc kubenswrapper[4730]: E0131 16:31:00.774193 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.775787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.775881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.775919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.775940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.775955 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.879017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.879070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.879087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.879122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.879139 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.981971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.982313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.982629 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.982692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:00 crc kubenswrapper[4730]: I0131 16:31:00.982712 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:00Z","lastTransitionTime":"2026-01-31T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.085947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.085993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.086009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.086036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.086053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.189759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.189853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.189872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.189895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.189914 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.292412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.292456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.292476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.292497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.292513 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.394953 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.395010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.395029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.395053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.395071 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.425855 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:13:05.090809704 +0000 UTC Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.463331 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.463407 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:01 crc kubenswrapper[4730]: E0131 16:31:01.463505 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:01 crc kubenswrapper[4730]: E0131 16:31:01.463980 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.497208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.497272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.497295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.497329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.497353 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.600275 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.600334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.600351 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.600374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.600391 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.703229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.703282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.703299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.703319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.703335 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.805951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.806016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.806122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.806150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.806167 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.908672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.909030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.909051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.909075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.909092 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:01Z","lastTransitionTime":"2026-01-31T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.986127 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 16:31:01 crc kubenswrapper[4730]: I0131 16:31:01.996906 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.012386 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.012541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.012580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.012632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.012655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.012674 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.029978 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.049897 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f78
14a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.069390 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.087411 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.105345 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.115654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.115976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.116131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.116271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.116415 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.121871 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.133426 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.150835 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.166970 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.183535 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.205563 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.219200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.219464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.219599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.219726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.219883 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.226553 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.243503 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.257369 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.284011 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d
04e311e5c172300d656eafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.328258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.328320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.328338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.328363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.328382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.426291 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:43:41.446126967 +0000 UTC Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.432144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.432190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.432206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.432228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.432245 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.464755 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:02 crc kubenswrapper[4730]: E0131 16:31:02.464912 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.465143 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:02 crc kubenswrapper[4730]: E0131 16:31:02.465200 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.465203 4730 scope.go:117] "RemoveContainer" containerID="668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.535465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.535729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.535740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.535760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.535772 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.638783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.638893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.638913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.638938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.638957 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.742117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.742155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.742166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.742182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.742195 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.787540 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/1.log" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.791855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.792479 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.811021 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.828400 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.843150 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.845223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.845263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.845276 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.845296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.845333 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.863051 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.891162 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04
115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.910179 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.925003 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.942770 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274
c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.947318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.947355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.947367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.947384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.947397 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:02Z","lastTransitionTime":"2026-01-31T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.960914 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.976927 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:02 crc kubenswrapper[4730]: I0131 16:31:02.997952 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.017409 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.037000 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.049506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.049534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.049542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.049555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.049564 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.052607 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.078961 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.090595 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.102210 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.151939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.151972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.151983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.151997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.152007 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.253886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.253923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.253932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.253945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.253954 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.356253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.356301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.356313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.356331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.356343 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.426541 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:50:53.434633427 +0000 UTC Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.458440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.458493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.458512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.458534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.458553 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.463962 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.464007 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:03 crc kubenswrapper[4730]: E0131 16:31:03.464080 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:03 crc kubenswrapper[4730]: E0131 16:31:03.464185 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.560383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.560427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.560441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.560468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.560491 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.663531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.663574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.663586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.663601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.663612 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.765507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.765546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.765556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.765570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.765579 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.797515 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/2.log" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.798296 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/1.log" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.801737 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92" exitCode=1 Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.801843 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.801922 4730 scope.go:117] "RemoveContainer" containerID="668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.802375 4730 scope.go:117] "RemoveContainer" containerID="529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92" Jan 31 16:31:03 crc kubenswrapper[4730]: E0131 16:31:03.802515 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.817507 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.839901 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.859075 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.868513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.868537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.868545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.868556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.868566 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.876681 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.937509 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.959344 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.975307 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.975625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.975747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.975930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.976053 
4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:03Z","lastTransitionTime":"2026-01-31T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:03 crc kubenswrapper[4730]: I0131 16:31:03.996488 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:03Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.012252 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.029185 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.051063 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.068533 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.078663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.078698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.078709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.078723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.078733 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.088040 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.109030 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.127335 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.145860 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.166102 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.181582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.181612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.181623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.181635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.181643 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.199543 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.284643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.284707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.284723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.284750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.284769 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.387979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.388037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.388053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.388075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.388093 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.427443 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:46:55.538731429 +0000 UTC Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.463515 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:04 crc kubenswrapper[4730]: E0131 16:31:04.463684 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.464291 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:04 crc kubenswrapper[4730]: E0131 16:31:04.464555 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.481923 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.490997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.491054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.491071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.491094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.491111 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.515143 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668982d01dea0ef230cacfa21f9999333a457a2d04e311e5c172300d656eafdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"message\\\":\\\"96 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489919 6096 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.489965 6096 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490003 6096 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:30:47.490001 6096 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 16:30:47.496655 6096 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:30:47.496698 6096 factory.go:656] Stopping watch factory\\\\nI0131 16:30:47.496717 6096 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:30:47.496873 6096 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 16:30:47.515064 6096 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:30:47.515087 6096 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:30:47.515150 6096 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:30:47.515181 6096 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:30:47.515256 6096 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.542038 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc 
kubenswrapper[4730]: I0131 16:31:04.571447 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.590036 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.594384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.594538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.594567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc 
kubenswrapper[4730]: I0131 16:31:04.594597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.594618 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.611364 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.631093 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.652848 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.672976 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.692486 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.698868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.698930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.698947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.698970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.698988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.714599 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.734527 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.760046 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.771465 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.783086 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.792940 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.800690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.800767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.800790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc 
kubenswrapper[4730]: I0131 16:31:04.800878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.800948 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.803184 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.805460 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/2.log" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.809180 4730 scope.go:117] "RemoveContainer" containerID="529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92" Jan 31 16:31:04 crc kubenswrapper[4730]: E0131 16:31:04.809399 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.827527 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04
115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.839455 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.851096 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.863657 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.874794 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.888680 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.898609 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.903193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.903243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.903252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.903266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.903275 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:04Z","lastTransitionTime":"2026-01-31T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.910785 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.922460 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.934685 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.948475 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.961168 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.976308 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.987215 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:04 crc kubenswrapper[4730]: I0131 16:31:04.994143 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:04Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.004313 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.006044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.006101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.006115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.006133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.006153 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.016037 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:05Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.108196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.108226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.108235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.108247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.108257 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.210088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.210143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.210160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.210182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.210200 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.312723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.312760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.312771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.312786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.312819 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.415167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.415194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.415203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.415213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.415222 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.428136 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:56:31.884004459 +0000 UTC Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.463578 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:05 crc kubenswrapper[4730]: E0131 16:31:05.463678 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.463578 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:05 crc kubenswrapper[4730]: E0131 16:31:05.463764 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.517898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.517956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.517973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.517996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.518017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.620172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.620226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.620244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.620267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.620287 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.651104 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:05 crc kubenswrapper[4730]: E0131 16:31:05.651250 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:31:05 crc kubenswrapper[4730]: E0131 16:31:05.651308 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:21.651290373 +0000 UTC m=+68.457347279 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.722923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.722973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.722989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.723011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.723028 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.825400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.825443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.825453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.825468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.825479 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.928175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.928242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.928262 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.928294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:05 crc kubenswrapper[4730]: I0131 16:31:05.928318 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:05Z","lastTransitionTime":"2026-01-31T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.030571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.030638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.030651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.030668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.030680 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.133264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.133290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.133297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.133309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.133319 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.154197 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.154345 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.154386 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.154418 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.154456 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.154617 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.154641 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.154660 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.154716 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:38.154695826 +0000 UTC m=+84.960752782 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.154992 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155085 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155094 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155246 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:31:38.155235461 +0000 UTC m=+84.961292367 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155282 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-31 16:31:38.155276103 +0000 UTC m=+84.961333019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155316 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155333 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155343 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:38.155337124 +0000 UTC m=+84.961394040 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.155429 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:38.155407356 +0000 UTC m=+84.961464312 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.236984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.237093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.237112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.237139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.237156 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.340321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.340794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.341002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.341147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.341269 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.428755 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:03:17.047711876 +0000 UTC Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.444356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.444462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.444486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.444509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.444527 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.463475 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.463578 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.464480 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:06 crc kubenswrapper[4730]: E0131 16:31:06.464766 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.547007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.547068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.547087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.547111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.547130 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.650071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.650405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.650546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.650770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.650999 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.754295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.754647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.754795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.755066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.755344 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.858502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.858955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.858976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.859000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.859017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.961671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.961730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.961748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.961772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:06 crc kubenswrapper[4730]: I0131 16:31:06.961788 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:06Z","lastTransitionTime":"2026-01-31T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.064621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.064674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.064686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.064703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.064713 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.167341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.167401 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.167418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.167441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.167479 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.271533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.271839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.271923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.272031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.272126 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.374680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.374741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.374758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.374781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.374798 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.430174 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:14:56.273616699 +0000 UTC Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.463680 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:07 crc kubenswrapper[4730]: E0131 16:31:07.463929 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.464469 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:07 crc kubenswrapper[4730]: E0131 16:31:07.464602 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.477427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.477710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.477913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.478066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.478193 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.580986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.581045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.581062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.581084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.581101 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.684586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.684682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.684700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.684725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.684742 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.787716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.788054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.788143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.788236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.788323 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.890942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.891006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.891023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.891049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.891067 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.993431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.993477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.993487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.993507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:07 crc kubenswrapper[4730]: I0131 16:31:07.993519 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:07Z","lastTransitionTime":"2026-01-31T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.097095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.097172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.097196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.097225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.097243 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.200776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.200935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.200953 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.200976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.200993 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.303887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.303941 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.303958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.303981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.303997 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.406913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.407009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.407031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.407057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.407074 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.431453 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:33:51.4289502 +0000 UTC Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.463949 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.463956 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:08 crc kubenswrapper[4730]: E0131 16:31:08.464130 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:08 crc kubenswrapper[4730]: E0131 16:31:08.464294 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.510145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.510265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.510342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.510378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.510402 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.613436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.613488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.613505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.613528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.613545 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.716062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.716123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.716140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.716165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.716183 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.818847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.818888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.818900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.818915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.818926 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.921711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.921774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.921790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.921853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:08 crc kubenswrapper[4730]: I0131 16:31:08.921875 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:08Z","lastTransitionTime":"2026-01-31T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.025122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.025183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.025200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.025223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.025241 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.128726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.128773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.128785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.128820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.128833 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.232916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.232985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.233002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.233027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.233045 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.336539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.336598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.336616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.336642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.336667 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.431866 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:09:54.808805798 +0000 UTC Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.439535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.439598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.439619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.439644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.439662 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.463854 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:09 crc kubenswrapper[4730]: E0131 16:31:09.464031 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.464062 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:09 crc kubenswrapper[4730]: E0131 16:31:09.464223 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.541976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.542015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.542028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.542047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.542059 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.645011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.645076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.645090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.645115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.645131 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.748580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.748625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.748637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.748655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.748672 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.854871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.854905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.854915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.854937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.854948 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.961657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.961721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.962084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.962148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:09 crc kubenswrapper[4730]: I0131 16:31:09.962260 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:09Z","lastTransitionTime":"2026-01-31T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.066349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.066423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.066444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.066477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.066500 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.170249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.170326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.170344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.170365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.170382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.274185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.274247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.274264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.274286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.274304 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.378332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.378397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.378414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.378439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.378461 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.432942 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:57:58.127971039 +0000 UTC Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.463340 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.463587 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.463866 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.463982 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.481472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.481520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.481540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.481561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.481578 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.584774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.584880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.584899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.584924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.584942 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.688659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.688755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.688776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.688801 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.688849 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.795147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.795213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.795236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.795265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.795287 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.827435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.827537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.827562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.827628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.827655 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.853164 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:10Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.858919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.858982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.859006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.859031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.859051 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.879197 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:10Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.883711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.883765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.883788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.883852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.883875 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.905469 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:10Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.913142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.913205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.913231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.913258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.913280 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.934464 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:10Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.939531 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.939759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.939908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.940050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.940191 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.962023 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:10Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:10 crc kubenswrapper[4730]: E0131 16:31:10.962264 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.964681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.964796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.964869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.964942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:10 crc kubenswrapper[4730]: I0131 16:31:10.964968 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:10Z","lastTransitionTime":"2026-01-31T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.068240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.068326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.068345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.068375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.068393 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.171975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.172038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.172053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.172077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.172093 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.275873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.275938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.275955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.275978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.275995 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.380039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.380108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.380124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.380150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.380172 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.433689 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:59:37.86174514 +0000 UTC Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.464060 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:11 crc kubenswrapper[4730]: E0131 16:31:11.464270 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.464287 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:11 crc kubenswrapper[4730]: E0131 16:31:11.464517 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.483524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.483581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.483598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.483621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.483638 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.586376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.586441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.586462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.586490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.586512 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.689589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.689662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.689694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.689714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.689727 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.793291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.793374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.793393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.793417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.793463 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.896777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.896870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.896889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.896912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.896929 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:11Z","lastTransitionTime":"2026-01-31T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.999940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:11.999995 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:11 crc kubenswrapper[4730]: I0131 16:31:12.000012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.000039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.000062 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.103224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.103323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.103384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.103410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.103465 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.206044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.206114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.206133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.206157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.206226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.309318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.309380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.309396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.309419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.309437 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.412641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.412680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.412697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.412719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.412737 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.433947 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:08:38.580592822 +0000 UTC Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.463739 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.463765 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:12 crc kubenswrapper[4730]: E0131 16:31:12.463919 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:12 crc kubenswrapper[4730]: E0131 16:31:12.464020 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.516113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.516147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.516156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.516167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.516177 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.619686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.619794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.619857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.619887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.619909 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.723416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.723470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.723487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.723509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.723529 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.826866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.826915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.826937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.826966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.826988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.929903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.929974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.929991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.930014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:12 crc kubenswrapper[4730]: I0131 16:31:12.930034 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:12Z","lastTransitionTime":"2026-01-31T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.032538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.032605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.032628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.032681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.032705 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:13Z","lastTransitionTime":"2026-01-31T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.135403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.135447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.135458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.135476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.135489 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:13Z","lastTransitionTime":"2026-01-31T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.238457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.238496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.238509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.238527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.238542 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:13Z","lastTransitionTime":"2026-01-31T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.340734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.340794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.340848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.340879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.340902 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:13Z","lastTransitionTime":"2026-01-31T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.434114 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:31:19.121322348 +0000 UTC Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.444756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.444852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.444879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.444909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:13 crc kubenswrapper[4730]: I0131 16:31:13.444933 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:13Z","lastTransitionTime":"2026-01-31T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.055402 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:14 crc kubenswrapper[4730]: E0131 16:31:14.055613 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.055993 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.056099 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:14 crc kubenswrapper[4730]: E0131 16:31:14.056237 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.056623 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:14 crc kubenswrapper[4730]: E0131 16:31:14.056779 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:14 crc kubenswrapper[4730]: E0131 16:31:14.057257 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.069165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.069374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.069536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.069743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.069947 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.172974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.173035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.173052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.173076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.173099 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.276353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.276416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.276434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.276457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.276475 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.378903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.378966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.378983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.379006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.379030 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.434603 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:53:41.393349937 +0000 UTC Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.483329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.483396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.483418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.483450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.483468 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.488194 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.507582 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.529014 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.550177 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.575817 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.586333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.586385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc 
kubenswrapper[4730]: I0131 16:31:14.586405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.586433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.586455 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.591562 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.609355 4730 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.628891 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.643952 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.659490 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.678544 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.688750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.688791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.688840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.688864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.688881 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.710938 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.733103 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 
secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 
2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.753351 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.773489 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.791020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.791406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.791621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.791920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.792231 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.797430 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.815796 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:14Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.895002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.895076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.895100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.895131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.895152 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.998239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.998280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.998295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.998315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:14 crc kubenswrapper[4730]: I0131 16:31:14.998333 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:14Z","lastTransitionTime":"2026-01-31T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.101396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.101450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.101463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.101483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.101498 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.210475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.210518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.210530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.210548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.210563 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.312878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.312940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.312957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.312982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.312999 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.415457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.415511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.415527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.415550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.415571 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.435674 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:24:24.136728447 +0000 UTC Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.463457 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.463542 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:15 crc kubenswrapper[4730]: E0131 16:31:15.463636 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.463940 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:15 crc kubenswrapper[4730]: E0131 16:31:15.463923 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:15 crc kubenswrapper[4730]: E0131 16:31:15.464449 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.465008 4730 scope.go:117] "RemoveContainer" containerID="529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92" Jan 31 16:31:15 crc kubenswrapper[4730]: E0131 16:31:15.465611 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.518241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.518291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.518308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.518332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.518351 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.621239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.621302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.621318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.621342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.621361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.724243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.724304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.724322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.724347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.724451 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.828191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.828250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.828270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.828401 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.828421 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.930860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.930903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.930913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.930926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:15 crc kubenswrapper[4730]: I0131 16:31:15.930937 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:15Z","lastTransitionTime":"2026-01-31T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.034282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.034319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.034328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.034341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.034350 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.137467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.137542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.137560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.137584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.137602 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.241496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.241597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.241625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.241696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.241720 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.346127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.346218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.346266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.346291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.346344 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.436658 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:39:15.791702227 +0000 UTC Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.450403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.450468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.450486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.450510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.450528 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.463762 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:16 crc kubenswrapper[4730]: E0131 16:31:16.463964 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.553197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.553255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.553263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.553277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.553287 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.655550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.655588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.655597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.655613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.655623 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.758277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.758614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.758630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.758647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.758659 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.860477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.860516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.860530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.860550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.860565 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.962894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.962939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.962951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.962969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:16 crc kubenswrapper[4730]: I0131 16:31:16.962980 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:16Z","lastTransitionTime":"2026-01-31T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.064887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.064935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.064946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.064962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.064971 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.168100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.168138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.168147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.168161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.168171 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.270680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.270713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.270724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.270739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.270751 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.373251 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.373292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.373307 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.373329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.373344 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.437333 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:15:00.513060517 +0000 UTC Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.463857 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:17 crc kubenswrapper[4730]: E0131 16:31:17.463973 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.464133 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:17 crc kubenswrapper[4730]: E0131 16:31:17.464175 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.464267 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:17 crc kubenswrapper[4730]: E0131 16:31:17.464309 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.475578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.475627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.475647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.475671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.475692 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.577988 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.578021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.578031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.578047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.578058 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.679855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.679883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.679891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.679903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.679911 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.783233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.783277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.783303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.783325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.783336 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.886250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.886300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.886313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.886334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.886347 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.988556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.988595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.988607 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.988622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:17 crc kubenswrapper[4730]: I0131 16:31:17.988635 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:17Z","lastTransitionTime":"2026-01-31T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.090773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.090828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.090840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.090855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.090867 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.193788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.193861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.193870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.193884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.193893 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.295518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.295574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.295589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.295606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.295617 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.398149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.398217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.398236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.398665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.398721 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.437860 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:09:54.122013823 +0000 UTC Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.463258 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:18 crc kubenswrapper[4730]: E0131 16:31:18.463433 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.501966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.502011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.502023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.502039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.502052 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.604443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.604520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.604545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.604572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.604588 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.706744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.706851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.706870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.706895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.706914 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.810244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.810296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.810314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.810338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.810356 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.912818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.912867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.912877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.912889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:18 crc kubenswrapper[4730]: I0131 16:31:18.912916 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:18Z","lastTransitionTime":"2026-01-31T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.015156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.015191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.015202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.015217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.015226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.117123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.117210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.117236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.117339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.117368 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.219034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.219106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.219127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.219152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.219171 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.321569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.321615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.321632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.321656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.321673 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.424264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.424318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.424335 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.424357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.424372 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.438750 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:40:24.069176579 +0000 UTC Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.463293 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.463335 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.463366 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:19 crc kubenswrapper[4730]: E0131 16:31:19.463477 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:19 crc kubenswrapper[4730]: E0131 16:31:19.463574 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:19 crc kubenswrapper[4730]: E0131 16:31:19.463697 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.526449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.526494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.526505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.526523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.526553 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.629062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.629116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.629133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.629158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.629175 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.731451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.731497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.731509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.731526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.731537 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.834191 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.834228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.834236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.834249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.834258 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.936628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.936664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.936674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.936689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:19 crc kubenswrapper[4730]: I0131 16:31:19.936699 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:19Z","lastTransitionTime":"2026-01-31T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.043432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.043471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.043482 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.043516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.043526 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.146260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.146518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.146590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.146676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.146754 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.248851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.248906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.248925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.248949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.248966 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.351163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.351207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.351221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.351237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.351251 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.439189 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:46:38.234656275 +0000 UTC Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.453586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.453610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.453618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.453630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.453639 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.463987 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:20 crc kubenswrapper[4730]: E0131 16:31:20.464080 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.554992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.555018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.555026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.555037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.555046 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.657521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.657560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.657571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.657588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.657603 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.760414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.760938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.761200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.761419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.761690 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.864305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.864327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.864337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.864349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.864358 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.967282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.967621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.967712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.967818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:20 crc kubenswrapper[4730]: I0131 16:31:20.967909 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:20Z","lastTransitionTime":"2026-01-31T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.071092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.071331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.071396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.071470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.071542 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.122342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.122404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.122422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.122444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.122461 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.142025 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:21Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.145553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.145878 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.145998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.146083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.146143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.160482 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:21Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.164737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.164872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.164894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.164952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.164972 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.183634 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:21Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.187657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.187746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.187795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.187869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.187888 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.200753 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:21Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.206280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.206379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.206397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.206424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.206442 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.219170 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:21Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.219397 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.221575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.221637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.221650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.221667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.221681 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.323933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.323986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.323997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.324013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.324026 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.427501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.427538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.427546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.427560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.427569 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.439949 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:44:45.541448466 +0000 UTC Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.463190 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.463203 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.463299 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.463436 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.463677 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.463735 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.529659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.529688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.529699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.529714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.529725 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.637103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.637157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.637167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.637182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.637192 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.738197 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.738356 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:31:21 crc kubenswrapper[4730]: E0131 16:31:21.738412 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:31:53.738394311 +0000 UTC m=+100.544451237 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.741616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.741670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.741682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.741698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.742124 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.844451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.844497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.844507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.844520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.844528 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.946934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.946982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.947000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.947021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:21 crc kubenswrapper[4730]: I0131 16:31:21.947037 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:21Z","lastTransitionTime":"2026-01-31T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.048987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.049026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.049035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.049048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.049059 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.150959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.150999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.151010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.151026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.151035 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.253302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.253341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.253355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.253373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.253385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.355477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.355527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.355540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.355553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.355563 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.440048 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:39:08.743914982 +0000 UTC Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.457931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.457967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.457980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.457996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.458006 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.463329 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:22 crc kubenswrapper[4730]: E0131 16:31:22.463434 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.560220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.560257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.560265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.560278 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.560287 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.662222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.662631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.662772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.662963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.663109 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.765635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.765683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.765691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.765703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.765712 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.869605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.869648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.869657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.869673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.869684 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.971702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.971954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.972031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.972105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:22 crc kubenswrapper[4730]: I0131 16:31:22.972167 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:22Z","lastTransitionTime":"2026-01-31T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.073937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.074153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.074219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.074280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.074336 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.176378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.176421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.176435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.176452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.176465 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.278236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.278309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.278328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.278353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.278373 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.380357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.380386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.380395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.380410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.380419 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.440886 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:04:09.817990465 +0000 UTC Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.463515 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:23 crc kubenswrapper[4730]: E0131 16:31:23.463624 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.463531 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:23 crc kubenswrapper[4730]: E0131 16:31:23.463691 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.463513 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:23 crc kubenswrapper[4730]: E0131 16:31:23.463738 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.482597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.482620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.482628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.482640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.482650 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.584736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.584779 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.584790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.584832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.584844 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.686684 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.686729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.686740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.686755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.686766 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.788836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.788874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.788883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.788899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.788908 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.892354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.892404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.892414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.892427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.892436 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.994610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.994652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.994662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.994676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:23 crc kubenswrapper[4730]: I0131 16:31:23.994684 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:23Z","lastTransitionTime":"2026-01-31T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.098501 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/0.log" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.098545 4730 generic.go:334] "Generic (PLEG): container finished" podID="2d1c5cbc-307d-4556-b162-2c5c0103662d" containerID="b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a" exitCode=1 Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.098578 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerDied","Data":"b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.098983 4730 scope.go:117] "RemoveContainer" containerID="b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.102240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.102283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.102296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.102337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.102351 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.122034 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.133705 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.144408 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.156412 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.170849 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.179480 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.188133 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.199046 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.204572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.204597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.204604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.204618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.204632 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.211006 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.224714 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.235288 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.247563 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.259278 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.269105 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.279159 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.290510 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.307520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.307570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.307583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.307600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.307613 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.307927 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.410997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.411042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.411050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.411063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.411074 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.441240 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:19:56.107501006 +0000 UTC Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.463336 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:24 crc kubenswrapper[4730]: E0131 16:31:24.463459 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.471633 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.480649 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.491177 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.506544 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.514321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.514415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.514440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.514502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.514522 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.527045 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.539218 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.549216 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.559028 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 
2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.570053 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.582073 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.594159 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.605708 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.616958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.616991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.617002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.617023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.617032 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.617304 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.627055 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.641430 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.650481 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 
31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.660702 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:24Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.719179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.719232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.719244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.719258 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.719266 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.821193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.821233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.821243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.821256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.821265 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.922812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.922848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.922858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.922871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:24 crc kubenswrapper[4730]: I0131 16:31:24.922881 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:24Z","lastTransitionTime":"2026-01-31T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.024592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.024615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.024623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.024635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.024642 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.102012 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/0.log" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.102054 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerStarted","Data":"628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.111893 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.120964 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.126968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.126993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.127002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.127012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.127021 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.132723 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.142466 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.161887 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04
115be1095bb1d19902901d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.173232 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.183360 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.194128 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 
2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.202357 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.215545 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.225984 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.228694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.228731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.228741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.228758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.228768 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.235255 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.244674 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.256199 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.267598 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.275929 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.285105 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:25Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.330753 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.330848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.330859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.330873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.330883 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.432852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.432873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.432881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.432892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.432900 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.442375 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:53:48.924327331 +0000 UTC Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.485098 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.485156 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:25 crc kubenswrapper[4730]: E0131 16:31:25.485181 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:25 crc kubenswrapper[4730]: E0131 16:31:25.485300 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.485480 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:25 crc kubenswrapper[4730]: E0131 16:31:25.485570 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.535137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.535244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.535283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.535303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.535320 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.637644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.637690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.637706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.637725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.637741 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.740503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.740548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.740559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.740575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.740586 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.842333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.842374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.842384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.842397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.842407 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.944790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.944930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.944949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.944974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:25 crc kubenswrapper[4730]: I0131 16:31:25.944993 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:25Z","lastTransitionTime":"2026-01-31T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.047292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.047329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.047341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.047356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.047367 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.149270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.149303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.149315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.149329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.149341 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.252025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.252062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.252073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.252088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.252100 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.353673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.353724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.353741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.353763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.353779 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.443477 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:55:44.219378646 +0000 UTC Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.455949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.456055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.456080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.456104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.456122 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.463295 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:26 crc kubenswrapper[4730]: E0131 16:31:26.463464 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.559637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.559679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.559716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.559759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.559774 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.662934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.662998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.663027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.663057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.663079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.765002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.765040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.765052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.765067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.765079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.867662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.867693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.867701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.867713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.867724 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.970475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.970516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.970526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.970538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:26 crc kubenswrapper[4730]: I0131 16:31:26.970545 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:26Z","lastTransitionTime":"2026-01-31T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.073071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.073127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.073142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.073163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.073176 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.175945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.175996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.176008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.176026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.176041 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.277856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.277898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.277907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.277920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.277932 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.380226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.380268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.380278 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.380294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.380304 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.443996 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:36:03.623808445 +0000 UTC Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.463331 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:27 crc kubenswrapper[4730]: E0131 16:31:27.463460 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.463533 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.463617 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:27 crc kubenswrapper[4730]: E0131 16:31:27.463681 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:27 crc kubenswrapper[4730]: E0131 16:31:27.463793 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.482568 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.482664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.482724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.482759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.482840 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.584300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.584328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.584337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.584349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.584358 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.686992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.687026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.687037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.687053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.687062 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.789316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.789356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.789367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.789384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.789396 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.891488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.891555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.891567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.891580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.891589 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.994170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.994229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.994239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.994253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:27 crc kubenswrapper[4730]: I0131 16:31:27.994263 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:27Z","lastTransitionTime":"2026-01-31T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.096054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.096101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.096113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.096131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.096143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.198419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.198451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.198463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.198478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.198490 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.300318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.300356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.300368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.300385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.300397 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.402691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.402728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.402740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.402754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.402763 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.444863 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:34:46.055544653 +0000 UTC Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.463997 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:28 crc kubenswrapper[4730]: E0131 16:31:28.464149 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.505025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.505088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.505112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.505139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.505160 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.607231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.607264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.607274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.607288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.607297 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.710365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.710420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.710443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.710472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.710494 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.813386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.813442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.813460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.813483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.813500 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.915003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.915032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.915040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.915053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:28 crc kubenswrapper[4730]: I0131 16:31:28.915062 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:28Z","lastTransitionTime":"2026-01-31T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.017005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.017093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.017121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.017153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.017180 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.119938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.120121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.120140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.120163 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.120180 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.221904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.221962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.221980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.222004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.222021 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.325665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.325702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.325713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.325729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.325741 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.428032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.428082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.428099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.428120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.428137 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.445587 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:28:11.83109191 +0000 UTC Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.463904 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.463938 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.464007 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:29 crc kubenswrapper[4730]: E0131 16:31:29.464012 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:29 crc kubenswrapper[4730]: E0131 16:31:29.464272 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:29 crc kubenswrapper[4730]: E0131 16:31:29.464342 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.464839 4730 scope.go:117] "RemoveContainer" containerID="529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.531341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.531384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.531399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.531418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.531434 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.634994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.635068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.635092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.635119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.635143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.738039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.738090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.738107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.738131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.738151 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.842612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.842682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.842699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.842736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.842757 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.945080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.945131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.945143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.945161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:29 crc kubenswrapper[4730]: I0131 16:31:29.945175 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:29Z","lastTransitionTime":"2026-01-31T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.047260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.047345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.047358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.047373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.047384 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.125653 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/2.log" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.128135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.129327 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.144969 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.148884 4730 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.148915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.148927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.148945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.148956 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.158188 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca
8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.175427 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.188736 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.202834 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.214522 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.227580 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.241885 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.252191 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.258387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.258422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.258436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.258453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.258466 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.263824 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.277374 4730 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.293982 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b
4be94158dc30d49dbb581731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.305999 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.320894 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.334839 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.348353 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.360207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.360264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.360275 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.360289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.360297 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.361347 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.446685 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:51:10.502324471 +0000 UTC Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.462380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.462421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.462432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.462449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.462460 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.463227 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:30 crc kubenswrapper[4730]: E0131 16:31:30.463405 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.565350 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.565390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.565401 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.565420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.565432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.668918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.668980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.668999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.669024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.669057 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.772190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.772247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.772265 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.772288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.772308 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.874889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.874952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.874972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.874999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.875016 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.979913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.979965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.979974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.979991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:30 crc kubenswrapper[4730]: I0131 16:31:30.980001 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:30Z","lastTransitionTime":"2026-01-31T16:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.082937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.082979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.082987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.083005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.083013 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.133671 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/3.log" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.134655 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/2.log" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.138517 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731" exitCode=1 Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.138570 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.138608 4730 scope.go:117] "RemoveContainer" containerID="529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.139677 4730 scope.go:117] "RemoveContainer" containerID="8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731" Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.140007 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.158937 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.176360 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.185792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.186030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.186306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.186481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.186577 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.190129 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.204780 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.223561 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 
2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.239494 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.261224 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.273697 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.287862 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.291347 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.291448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.291468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.291851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.292060 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.303638 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.321601 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.341326 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.361315 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.377745 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.391329 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.394958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.394996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.395012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.395035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.395052 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.411730 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.442489 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://529a4d016f3eb87900cb714f6c17226dc07dfe04115be1095bb1d19902901d92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:03Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.339877 6286 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 16:31:03.339965 6286 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 16:31:03.339880 6286 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 16:31:03.340366 6286 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.340768 6286 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 16:31:03.346151 6286 factory.go:656] Stopping watch factory\\\\nI0131 16:31:03.428737 6286 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0131 16:31:03.428776 6286 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0131 16:31:03.428925 6286 ovnkube.go:599] Stopped ovnkube\\\\nI0131 16:31:03.428968 6286 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 16:31:03.429081 6286 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:30Z\\\",\\\"message\\\":\\\"59 services_controller.go:445] Built service openshift-route-controller-manager/route-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 16:31:30.421591 6659 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z]\\\\nI0131 16:31:30.421599 6659 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.447485 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:48:41.690144605 +0000 UTC Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.464079 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.464205 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.464092 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.464243 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.464573 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.464709 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.498182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.498238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.498256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.498279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.498299 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.501554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.501624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.501643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.501666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.501684 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.521036 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.526026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.526120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.526143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.526166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.526184 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.541987 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.546110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.546174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.546199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.546225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.546243 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.565200 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.569668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.569706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.569717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.569732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.569744 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.587727 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.591560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.591618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.591635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.591684 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.591701 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.612976 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:31Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:31 crc kubenswrapper[4730]: E0131 16:31:31.613201 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.615367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.615416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.615428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.615442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.615453 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.718696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.718733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.718743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.718758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.718769 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.821938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.822161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.822235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.822309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.822415 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.924748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.924948 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.925028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.925124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:31 crc kubenswrapper[4730]: I0131 16:31:31.925197 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:31Z","lastTransitionTime":"2026-01-31T16:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.028969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.029337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.029502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.029638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.029756 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.132968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.133037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.133055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.133079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.133096 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.143935 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/3.log" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.148516 4730 scope.go:117] "RemoveContainer" containerID="8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731" Jan 31 16:31:32 crc kubenswrapper[4730]: E0131 16:31:32.148880 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.174593 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.206946 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b
4be94158dc30d49dbb581731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:30Z\\\",\\\"message\\\":\\\"59 services_controller.go:445] Built service openshift-route-controller-manager/route-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 16:31:30.421591 6659 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z]\\\\nI0131 16:31:30.421599 6659 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.227629 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.237095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.237192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.237250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.237274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.237328 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.250164 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.275655 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.295905 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.320167 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.341464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.341516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.341536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.341563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.341585 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.341440 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.360773 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.384648 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.405098 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.429511 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.444998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.445048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc 
kubenswrapper[4730]: I0131 16:31:32.445067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.445089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.445106 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.448456 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:20:04.953346869 +0000 UTC Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.464048 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:32 crc kubenswrapper[4730]: E0131 16:31:32.464640 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.470717 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.490467 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.492077 4730 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.506550 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.521879 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.535868 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:32Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.547535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.547574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.547587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.547605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.547618 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.650018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.650049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.650057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.650070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.650081 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.752924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.752973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.752988 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.753006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.753018 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.855449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.855477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.855485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.855498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.855506 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.958260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.958312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.958329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.958352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:32 crc kubenswrapper[4730]: I0131 16:31:32.958371 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:32Z","lastTransitionTime":"2026-01-31T16:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.061160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.061295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.061318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.061343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.061361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.163910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.163970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.163989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.164014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.164032 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.266969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.267008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.267021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.267036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.267048 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.369756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.369849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.369868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.369894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.369914 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.448758 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:02:40.161428314 +0000 UTC Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.464308 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.464467 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:33 crc kubenswrapper[4730]: E0131 16:31:33.464504 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.464573 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:33 crc kubenswrapper[4730]: E0131 16:31:33.464701 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:33 crc kubenswrapper[4730]: E0131 16:31:33.464927 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.480064 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.480161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.480254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.480273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.480285 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.583149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.583227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.583247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.583273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.583292 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.686117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.686175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.686196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.686220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.686238 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.789749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.789818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.789837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.789862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.789929 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.893332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.893425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.893443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.893480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.893607 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.997729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.997844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.997873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.997906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:33 crc kubenswrapper[4730]: I0131 16:31:33.997932 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:33Z","lastTransitionTime":"2026-01-31T16:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.100739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.100781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.100791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.100819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.100834 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.204016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.204065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.204085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.204112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.204129 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.307244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.307291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.307306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.307333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.307351 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.410370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.410418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.410431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.410451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.410465 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.449898 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:36:44.448384745 +0000 UTC Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.463849 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:34 crc kubenswrapper[4730]: E0131 16:31:34.464031 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.474766 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.493174 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.509345 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.512868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.512948 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.512964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.512984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.512997 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.534603 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:30Z\\\",\\\"message\\\":\\\"59 services_controller.go:445] Built service openshift-route-controller-manager/route-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 16:31:30.421591 6659 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z]\\\\nI0131 16:31:30.421599 6659 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7
c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.549447 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.566751 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.583376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.598487 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.616506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.616544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.616555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.616402 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.616572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.616870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.635477 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.655702 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.680056 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.700980 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.722279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.722330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.722342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.722365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.722378 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.729639 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.754110 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.773958 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.805950 4730 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a658dfd-cb8b-45c0-873b-1dc5d59d65b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9501b3dd8b1374ffc2f3a6ac550539119be89530a0ab12d946bef8af73ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2e994cdaac0e7e168039fe280eb9849676bbb33e048590faeac4ea93cc9756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8654d12dcd8ad892ee6a5e4f0c0663c9b1040fc0120c47f7e85de62443934b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://33545d13e478eb3082cb6b534738ab7f69acf9167e21436ec47b6e48ccbeb4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae4671f2a044112a884a087a077a8bc8f351dafc63bb183ef8c52305b32b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.823531 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:34Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.825795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.825858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.825871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.825903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.825915 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.929028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.929082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.929094 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.929114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:34 crc kubenswrapper[4730]: I0131 16:31:34.929126 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:34Z","lastTransitionTime":"2026-01-31T16:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.031654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.031685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.031694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.031739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.031753 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.134911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.134973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.134990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.135015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.135036 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.237384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.237434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.237446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.237463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.237474 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.340209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.340254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.340267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.340285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.340295 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.443147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.443221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.443239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.443266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.443284 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.450320 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:45:55.195119728 +0000 UTC Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.464065 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.464068 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:35 crc kubenswrapper[4730]: E0131 16:31:35.464195 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.464069 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:35 crc kubenswrapper[4730]: E0131 16:31:35.464417 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:35 crc kubenswrapper[4730]: E0131 16:31:35.464485 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.546859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.546926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.546946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.546974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.546993 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.649675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.649746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.649764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.649795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.649844 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.753036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.753107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.753128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.753156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.753176 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.856023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.856111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.856125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.856167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.856182 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.960211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.960309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.960374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.960401 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:35 crc kubenswrapper[4730]: I0131 16:31:35.960455 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:35Z","lastTransitionTime":"2026-01-31T16:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.063261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.063324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.063341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.063365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.063387 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.166854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.166909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.166930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.166954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.166994 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.270833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.270905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.270927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.270955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.270980 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.374090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.374158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.374180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.374209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.374230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.451356 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:54:41.371687157 +0000 UTC Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.463885 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:36 crc kubenswrapper[4730]: E0131 16:31:36.464175 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.476975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.477033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.477056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.477084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.477108 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.580040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.580089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.580106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.580126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.580145 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.684002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.684079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.684103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.684136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.684159 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.787671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.788022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.788031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.788046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.788054 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.891657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.892051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.892206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.892357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.892495 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.996138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.996550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.996741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.996933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:36 crc kubenswrapper[4730]: I0131 16:31:36.997073 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:36Z","lastTransitionTime":"2026-01-31T16:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.100546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.100641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.100664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.100697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.100718 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.204195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.204523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.204688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.204859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.204988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.309061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.309123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.309147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.309178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.309203 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.413066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.413125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.413142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.413167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.413184 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.452204 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:01:16.106292091 +0000 UTC Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.463563 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.463608 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.463984 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:37 crc kubenswrapper[4730]: E0131 16:31:37.464237 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:37 crc kubenswrapper[4730]: E0131 16:31:37.464641 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:37 crc kubenswrapper[4730]: E0131 16:31:37.465049 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.479619 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.516765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.516848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.516869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.516892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.516909 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.619645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.619711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.619734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.619763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.619785 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.722900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.722947 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.722965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.722987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.723003 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.826092 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.826144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.826160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.826183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.826198 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.929181 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.929225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.929242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.929263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:37 crc kubenswrapper[4730]: I0131 16:31:37.929279 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:37Z","lastTransitionTime":"2026-01-31T16:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.031973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.032019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.032035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.032056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.032072 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.134455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.134519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.134542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.134570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.134592 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.226871 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.226990 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.227038 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.227095 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227134 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.227096642 +0000 UTC m=+149.033153598 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.227209 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227241 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227258 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227310 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227336 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.227311148 +0000 UTC m=+149.033368094 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227336 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227356 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227386 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227419 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227443 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227419 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.227395841 +0000 UTC m=+149.033452797 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227548 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.227525655 +0000 UTC m=+149.033582601 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.227578 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.227561456 +0000 UTC m=+149.033618422 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.237497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.237546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.237562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.237583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.237598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.340436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.340493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.340508 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.340529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.340545 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.443624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.443680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.443696 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.443719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.443737 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.452858 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:32:25.06587932 +0000 UTC Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.463481 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:38 crc kubenswrapper[4730]: E0131 16:31:38.463703 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.547535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.547597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.547618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.547644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.547667 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.651329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.651392 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.651415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.651445 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.651467 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.754192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.754236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.754252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.754273 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.754291 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.857530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.857597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.857609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.857632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.857645 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.961458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.961516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.961532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.961556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:38 crc kubenswrapper[4730]: I0131 16:31:38.961574 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:38Z","lastTransitionTime":"2026-01-31T16:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.064555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.064622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.064646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.064675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.064696 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.168232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.168346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.168416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.168453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.168474 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.271577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.271648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.271673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.271702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.271724 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.375148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.375206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.375223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.375247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.375263 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.453977 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:36:49.726388861 +0000 UTC Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.463273 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:39 crc kubenswrapper[4730]: E0131 16:31:39.463454 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.463548 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:39 crc kubenswrapper[4730]: E0131 16:31:39.463640 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.463708 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:39 crc kubenswrapper[4730]: E0131 16:31:39.463783 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.478507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.478587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.478604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.478623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.478639 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.581520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.581593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.581611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.581638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.581684 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.684487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.684546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.684562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.684584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.684601 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.787736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.787787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.787829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.787859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.787879 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.890790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.890890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.890910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.890936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.890952 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.994032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.994101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.994124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.994145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:39 crc kubenswrapper[4730]: I0131 16:31:39.994161 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:39Z","lastTransitionTime":"2026-01-31T16:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.097409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.097745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.097965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.098157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.098336 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.201439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.201503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.201521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.201545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.201562 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.305375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.305438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.305456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.305485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.305507 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.408667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.408753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.408782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.408845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.408869 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.454868 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:49:25.307117794 +0000 UTC Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.463176 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:40 crc kubenswrapper[4730]: E0131 16:31:40.463342 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.511654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.511782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.511837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.511868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.511889 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.614873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.614935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.614975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.615002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.615023 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.718137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.718188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.718204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.718222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.718245 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.821354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.822039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.822216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.822359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.822486 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.926684 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.926749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.926765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.926789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:40 crc kubenswrapper[4730]: I0131 16:31:40.926833 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:40Z","lastTransitionTime":"2026-01-31T16:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.030248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.030620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.030763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.030964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.031102 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.137053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.137096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.137107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.137124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.137135 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.240320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.240637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.240921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.241117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.241221 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.343547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.343590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.343609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.343626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.343634 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.446240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.446312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.446337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.446402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.446445 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.455439 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:10:35.90236266 +0000 UTC Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.463438 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.463439 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.463569 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:41 crc kubenswrapper[4730]: E0131 16:31:41.463710 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:41 crc kubenswrapper[4730]: E0131 16:31:41.463885 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:41 crc kubenswrapper[4730]: E0131 16:31:41.464014 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.550304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.550373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.550396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.550509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.550539 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.654647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.654704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.654721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.654744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.654762 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.758719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.758791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.758850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.758875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.758892 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.862655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.862729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.862751 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.862776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.862795 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.966414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.966871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.967044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.967251 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.967416 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.977374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.977441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.977462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.977490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:41 crc kubenswrapper[4730]: I0131 16:31:41.977509 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:41Z","lastTransitionTime":"2026-01-31T16:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:41 crc kubenswrapper[4730]: E0131 16:31:41.999397 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:41Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.007240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.007315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.007338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.007366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.007385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: E0131 16:31:42.029685 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.035429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.035505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.035530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.035565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.035593 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: E0131 16:31:42.060338 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.066001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.066075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.066100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.066132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.066155 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: E0131 16:31:42.088848 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.093763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.093887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.093916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.093949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.093978 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: E0131 16:31:42.114260 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:42Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:42 crc kubenswrapper[4730]: E0131 16:31:42.114518 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.117328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.117386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.117405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.117429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.117450 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.221034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.221081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.221100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.221118 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.221131 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.323487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.323546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.323563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.323588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.323609 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.427474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.427540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.427557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.427580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.427598 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.455918 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:41:11.302408018 +0000 UTC Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.463378 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:42 crc kubenswrapper[4730]: E0131 16:31:42.463546 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.530319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.530353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.530363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.530376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.530385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.633252 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.633342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.633369 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.633394 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.633414 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.737015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.737068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.737085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.737109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.737151 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.840012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.840047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.840055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.840066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.840076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.943650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.943711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.943726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.943749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:42 crc kubenswrapper[4730]: I0131 16:31:42.943772 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:42Z","lastTransitionTime":"2026-01-31T16:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.046102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.046172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.046184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.046201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.046228 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.149285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.149420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.149495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.149530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.149600 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.252934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.253023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.253047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.253075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.253098 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.356264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.356311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.356324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.356342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.356354 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.456326 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:52:11.112247077 +0000 UTC
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.458791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.458850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.458862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.458898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.458910 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.464002 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.464114 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.464193 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 16:31:43 crc kubenswrapper[4730]: E0131 16:31:43.464328 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 16:31:43 crc kubenswrapper[4730]: E0131 16:31:43.464426 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 16:31:43 crc kubenswrapper[4730]: E0131 16:31:43.464817 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.465179 4730 scope.go:117] "RemoveContainer" containerID="8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731"
Jan 31 16:31:43 crc kubenswrapper[4730]: E0131 16:31:43.465346 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.560901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.560935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.560942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.560954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.560962 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.664345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.664423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.664443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.664473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.664494 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.767381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.767452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.767510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.767541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.767560 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.870371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.870439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.870457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.870480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.870498 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.973442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.973498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.973516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.973538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:43 crc kubenswrapper[4730]: I0131 16:31:43.973557 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:43Z","lastTransitionTime":"2026-01-31T16:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.076548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.076633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.076655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.076688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.076709 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.180069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.180182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.180207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.180235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.180259 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.283618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.283738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.283826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.283858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.283879 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.386749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.386796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.386830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.386851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.386864 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.457394 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:37:21.417559196 +0000 UTC
Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.464251 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw"
Jan 31 16:31:44 crc kubenswrapper[4730]: E0131 16:31:44.465295 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.486501 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.490363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.490453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.490472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.490528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.490620 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.505674 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.529354 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.550596 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 
2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.570547 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.593981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.594109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.594175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.594205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.594262 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.599443 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:
30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.617279 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:
35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.635621 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.670215 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a658dfd-cb8b-45c0-873b-1dc5d59d65b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9501b3dd8b1374ffc2f3a6ac550539119be89530a0ab12d946bef8af73ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2e994cdaac0e7e168039fe280eb9849676bbb33e048590faeac4ea93cc9756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8654d12dcd8ad892ee6a5e4f0c0663c9b1040fc0120c47f7e85de62443934b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33545d13e478eb3082cb6b534738ab7f69acf9167e21436ec47b6e48ccbeb4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae4671f2a044112a884a087a077a8bc8f351dafc63bb183ef8c52305b32b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.687551 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.697236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.697289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.697308 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.697332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.697349 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.708446 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.727998 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.749694 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.769428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.782695 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.797988 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.799445 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.799488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.799500 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.799516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.799529 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.813154 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4420071-33fa-480b-8955-bf03c8e3dd3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2505818d810a7e94e8b9705a3938c35e4911506d30ae620ea3fc35179d375a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a18227efec307a6154703749b5e1dad41648745e260982a1d424c58dab97d912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a18227efec307a6154703749b5e1dad41648745e260982a1d424c58dab97d912\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.834497 4730 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.865502 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:30Z\\\",\\\"message\\\":\\\"59 services_controller.go:445] Built service openshift-route-controller-manager/route-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 16:31:30.421591 6659 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z]\\\\nI0131 16:31:30.421599 6659 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:44Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.902326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.902384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.902402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.902425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:44 crc kubenswrapper[4730]: I0131 16:31:44.902443 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:44Z","lastTransitionTime":"2026-01-31T16:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.004869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.004910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.004919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.004935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.004946 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.107386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.107928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.107938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.107950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.107972 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.211910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.211969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.212001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.212023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.212041 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.314366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.314395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.314402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.314414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.314423 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.417142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.417175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.417182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.417194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.417203 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.458036 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:33:47.561614588 +0000 UTC Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.463389 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.463468 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.463389 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:45 crc kubenswrapper[4730]: E0131 16:31:45.463550 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:45 crc kubenswrapper[4730]: E0131 16:31:45.463643 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:45 crc kubenswrapper[4730]: E0131 16:31:45.463883 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.520582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.520636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.520653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.520676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.520694 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.625129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.625200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.625219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.625247 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.625266 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.729093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.729140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.729156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.729179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.729198 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.832518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.832579 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.832599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.832621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.832637 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.936636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.936699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.936716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.936740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:45 crc kubenswrapper[4730]: I0131 16:31:45.936761 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:45Z","lastTransitionTime":"2026-01-31T16:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.039574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.039642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.039662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.039762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.039783 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.143640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.143693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.143711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.143733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.143749 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.246417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.246477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.246493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.246516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.246534 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.349996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.350060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.350076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.350100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.350120 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.453249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.453313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.453336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.453360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.453376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.458504 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:05:47.44349495 +0000 UTC Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.463964 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:46 crc kubenswrapper[4730]: E0131 16:31:46.464399 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.556415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.556789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.556877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.556945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.556969 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.660664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.660728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.660753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.660780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.660799 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.763879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.763951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.763978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.764011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.764036 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.867317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.867367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.867383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.867408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.867426 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.970621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.970715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.970736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.970771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:46 crc kubenswrapper[4730]: I0131 16:31:46.970794 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:46Z","lastTransitionTime":"2026-01-31T16:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.074478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.074551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.074567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.074601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.074619 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.177570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.177659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.177682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.177710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.177730 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.280128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.280211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.280238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.280262 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.280279 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.383672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.383736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.383760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.383790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.383847 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.459735 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:15:49.106498302 +0000 UTC Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.464136 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.464202 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:47 crc kubenswrapper[4730]: E0131 16:31:47.464365 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.464391 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:47 crc kubenswrapper[4730]: E0131 16:31:47.464525 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:47 crc kubenswrapper[4730]: E0131 16:31:47.465236 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.486792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.486902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.486921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.486945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.486963 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.591399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.591546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.591571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.591610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.591636 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.695882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.695936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.695946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.695968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.695980 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.798922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.798954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.798962 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.799042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.799055 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.902131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.902230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.902253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.902322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:47 crc kubenswrapper[4730]: I0131 16:31:47.902346 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:47Z","lastTransitionTime":"2026-01-31T16:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.005942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.006007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.006042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.006069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.006090 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.108933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.108987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.109013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.109040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.109338 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.214172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.214241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.214260 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.214285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.214306 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.317420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.317476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.317493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.317549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.317566 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.420903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.421017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.421051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.421082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.421104 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.460851 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:53:52.917338767 +0000 UTC Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.464300 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:48 crc kubenswrapper[4730]: E0131 16:31:48.464587 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.524839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.525215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.525246 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.525268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.525289 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.628370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.628440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.628465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.628495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.628517 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.731421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.731492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.731510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.731533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.731552 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.834992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.835060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.835077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.835104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.835121 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.937614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.937660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.937678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.937701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:48 crc kubenswrapper[4730]: I0131 16:31:48.937718 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:48Z","lastTransitionTime":"2026-01-31T16:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.040841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.040906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.040926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.040951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.040968 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.143843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.144301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.144789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.145123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.145286 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.249189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.249263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.249289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.249320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.249342 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.352859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.352924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.352950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.352982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.353004 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.456172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.456494 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.456875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.457186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.457336 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.461448 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:07:30.269435866 +0000 UTC Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.463857 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:49 crc kubenswrapper[4730]: E0131 16:31:49.464275 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.464061 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:49 crc kubenswrapper[4730]: E0131 16:31:49.464858 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.463967 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:49 crc kubenswrapper[4730]: E0131 16:31:49.465420 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.560851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.560911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.560929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.560954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.560971 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.664993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.665078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.665125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.665151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.665167 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.768938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.769294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.769607 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.769847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.770025 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.873661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.874018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.874178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.874323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.874459 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.977949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.978019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.978046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.978077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:49 crc kubenswrapper[4730]: I0131 16:31:49.978101 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:49Z","lastTransitionTime":"2026-01-31T16:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.081193 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.081306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.081332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.081504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.081535 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.184197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.184249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.184279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.184305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.184323 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.287256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.287299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.287314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.287338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.287354 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.390883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.391001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.391026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.391055 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.391077 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.464962 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:44:53.063628806 +0000 UTC Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.467006 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:50 crc kubenswrapper[4730]: E0131 16:31:50.467553 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.493519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.493848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.494056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.494198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.494314 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.597770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.597828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.597840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.597855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.597866 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.701540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.701589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.701606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.701630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.701649 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.804234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.804301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.804371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.804410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.804429 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.907723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.907780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.907824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.907851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:50 crc kubenswrapper[4730]: I0131 16:31:50.907868 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:50Z","lastTransitionTime":"2026-01-31T16:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.010551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.010610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.010626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.010650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.010668 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.114065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.114111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.114129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.114152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.114171 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.217318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.217386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.217409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.217437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.217461 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.319827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.319886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.319899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.319918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.319930 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.422279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.422319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.422329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.422345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.422358 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.464059 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.464071 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.464095 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:51 crc kubenswrapper[4730]: E0131 16:31:51.464194 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:51 crc kubenswrapper[4730]: E0131 16:31:51.464313 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:51 crc kubenswrapper[4730]: E0131 16:31:51.464609 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.465110 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:20:32.74870596 +0000 UTC Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.524738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.524842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.524869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.524894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.524913 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.627929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.627991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.628009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.628034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.628053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.731467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.731525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.731541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.731564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.731581 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.834973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.835072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.835103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.835130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.835154 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.937599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.937648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.937664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.937689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:51 crc kubenswrapper[4730]: I0131 16:31:51.937706 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:51Z","lastTransitionTime":"2026-01-31T16:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.040602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.040646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.040665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.040686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.040703 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.143660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.143715 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.143732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.143754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.143772 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.246675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.246749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.246774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.246836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.246864 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.350059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.350119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.350137 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.350161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.350179 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.452781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.452862 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.452881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.452904 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.452920 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.464297 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:52 crc kubenswrapper[4730]: E0131 16:31:52.464476 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.466227 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:59:52.165710369 +0000 UTC Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.497018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.497065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.497081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.497101 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.497117 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: E0131 16:31:52.518114 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:52Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.523334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.523390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.523407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.523436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.523454 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: E0131 16:31:52.543705 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:52Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.548528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.548580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.548597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.548642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.548659 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: E0131 16:31:52.567671 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:52Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.572992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.573043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.573059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.573080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.573096 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: E0131 16:31:52.593315 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:52Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.597918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.597985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.598002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.598025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.598041 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: E0131 16:31:52.621472 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:52Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:52 crc kubenswrapper[4730]: E0131 16:31:52.621707 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.624164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.624209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.624225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.624249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.624267 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.728390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.728455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.728472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.728495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.728512 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.830971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.831033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.831049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.831073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.831090 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.934128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.934189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.934208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.934235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:52 crc kubenswrapper[4730]: I0131 16:31:52.934256 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:52Z","lastTransitionTime":"2026-01-31T16:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.037291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.037362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.037379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.037403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.037420 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.140452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.140726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.140921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.141067 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.141228 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.243933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.244229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.244383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.244559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.244752 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.347381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.347443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.347460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.347483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.347500 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.450527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.450585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.450602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.450627 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.450644 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.464009 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.464017 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.464075 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:53 crc kubenswrapper[4730]: E0131 16:31:53.464425 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:53 crc kubenswrapper[4730]: E0131 16:31:53.464562 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:53 crc kubenswrapper[4730]: E0131 16:31:53.464720 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.467047 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:15:45.028720043 +0000 UTC Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.554325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.554380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.554396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.554421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.554441 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.657295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.657372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.657399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.657422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.657440 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.759224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.759264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.759277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.759295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.759310 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.810403 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:53 crc kubenswrapper[4730]: E0131 16:31:53.810580 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:31:53 crc kubenswrapper[4730]: E0131 16:31:53.810646 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs podName:39ef74a4-f27d-498b-8bbd-aae64590d030 nodeName:}" failed. No retries permitted until 2026-01-31 16:32:57.810625634 +0000 UTC m=+164.616682570 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs") pod "network-metrics-daemon-sg8lw" (UID: "39ef74a4-f27d-498b-8bbd-aae64590d030") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.862369 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.862407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.862418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.862436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.862447 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.965843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.965920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.965943 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.965972 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:53 crc kubenswrapper[4730]: I0131 16:31:53.966030 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:53Z","lastTransitionTime":"2026-01-31T16:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.068943 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.069007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.069023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.069047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.069065 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.171514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.171580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.171599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.171634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.171674 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.275200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.275289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.275343 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.275365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.275381 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.378239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.378296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.378314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.378339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.378358 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.463700 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:54 crc kubenswrapper[4730]: E0131 16:31:54.463928 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.467981 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:37:26.674933572 +0000 UTC Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.478283 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be9957a709834221dcc7ce5c49bcec3466b64454e59a8b5464c29aad54ff1491\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.481039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.481106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.481126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.481155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.481173 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.496735 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e3986dd90a67e8981ed9ac616a88e34fa767b5d5584fdceedea3c34a89a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee51f85adcaa19e7db47536a55fb92eaad81423f5c772dca1bc0bace77161830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.512974 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.533794 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bndmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77b7e075-5b61-4efb-9138-4a40f1588cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b602247c28463b7dc6dc3cb13a7934a74625602c7e828c82d9c2c155c7613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b02d5d1f0f930c6a1987a63ed6ab332dd318d32861d212dacb7332a62bb8ef4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7314d959bf55e7f274e6c1355ab2661f1acde6882f9e2394c95662d600a3486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6562724cd3c0b3f2a27189ca880d0ddb1d719b811a9353d46ed685063b0a2e65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f916d691691fb4096a8f2b724a9cc853c0dae0d87d9c4ba4982e26ad3831a285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b1b0a4e30c5304695fb1ea5014ea494487ef93142f879ca8ce79e5a9054acc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dca624fcf64489ce8d036c255f56c1e748f9463d4b8d759da6991862ba7c04b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-79ld7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bndmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.548311 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5f4md" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3579c4f-c5ac-4bbb-b907-d472dcf735fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://791da694426fe7d4afe125f1d18ecc199ef99912a6a37a3b2cea83caf4808141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tpc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5f4md\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.563778 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbebb1-b682-4013-a2d5-7ca2f47f03e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fa8c493a2d761884c30e01ea8b61a5f37236dc220b0872c65b809b5e42f0493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgxsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mzg47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.583603 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.583708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.583768 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.583855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.583917 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.594073 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a658dfd-cb8b-45c0-873b-1dc5d59d65b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82c9501b3dd8b1374ffc2f3a6ac550539119be89530a0ab12d946bef8af73ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2e994cdaac0e7e168039fe280eb9849676bbb33e048590faeac4ea93cc9756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8654d12dcd8ad892ee6a5e4f0c0663c9b1040fc0120c47f7e85de62443934b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33545d13e478eb3082cb6b534738ab7f69acf9167e21436ec47b6e48ccbeb4c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae4671f2a044112a884a087a077a8bc8f351dafc63bb183ef8c52305b32b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569ab9a5bc1684f31b7c934785de94a803f30d7ea366e08f536f1e7acb5bdb66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7193da834d3d58d496446f4b26ddaba6e55eee7a386dd0e8e8c9a67e4aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22091e9f8e205f4abe02d46b2dccf3c86d4e9171f3ddce551bed190e4abf5e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.610582 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.625865 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39ef74a4-f27d-498b-8bbd-aae64590d030\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvw5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sg8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.650865 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c8lpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1c5cbc-307d-4556-b162-2c5c0103662d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:23Z\\\",\\\"message\\\":\\\"2026-01-31T16:30:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591\\\\n2026-01-31T16:30:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b2640051-f750-42a0-ad01-39218745e591 to /host/opt/cni/bin/\\\\n2026-01-31T16:30:38Z [verbose] multus-daemon started\\\\n2026-01-31T16:30:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T16:31:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:31:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c8lpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.666365 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7p26r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb1945b-e8d1-4041-bdf9-24573064e93a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45184eb0e5595a59abf4e5f1acee2834bee3db9fd690f063e5256d1a11eebe2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ld9hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7p26r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.689587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.689647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.689664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.689688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.689705 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.695321 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e53a6e0-ca28-4088-8ced-22ba134f316e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T16:31:30Z\\\",\\\"message\\\":\\\"59 services_controller.go:445] Built service openshift-route-controller-manager/route-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 16:31:30.421591 6659 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:30Z is after 2025-08-24T17:21:41Z]\\\\nI0131 16:31:30.421599 6659 services_controller.go:451] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:31:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7
c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlj7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-25nsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.710277 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4420071-33fa-480b-8955-bf03c8e3dd3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2505818d810a7e94e8b9705a3938c35e4911506d30ae620ea3fc35179d375a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a18227efec307a6154703749b5e1dad41648745e260982a1d424c58dab97d912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a18227efec307a6154703749b5e1dad41648745e260982a1d424c58dab97d912\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.726181 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.741063 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53612900-51fd-4d01-9a6f-bc9a3c252f3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:31:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c8d849c5465966f2f594e26b08dfd9894c2f0337bba1e90085896ab8d8c5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71180a847d6310a8c7bc6f33e0d092316b4927684618237542ff99951cc4bb46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab70a9385676283881a5e8581eea0d5dc9f7a467b10e66ca34dc25efce6c712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22361dfe64cddaa4a214b8cb809957077fb87ce30a0d791b6a65c6e9fb258ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.764113 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c75507437612007620093aad37547be14b8bfcb7cd0bce342109abcc059f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 
2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.780709 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb56b3f-38e1-40f3-b28a-bfd1b3f50188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9c845dc8d73f5772d7b12355f1e44b3d87a285b2084793ba2a548a93ce1aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5cf24c496c1b14047ffda4e5f6d59d18eceaaca6428177b27978ce8d4b2882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fd5k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6p6cq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.792850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.793051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.793153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.793256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.793361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.800634 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a821a82c-cea5-41e2-aa16-abfb02c7e54c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T16:30:33Z\\\",\\\"message\\\":\\\"'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 16:30:33.672170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 16:30:33.672173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 16:30:33.672355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 16:30:33.679330 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769877017\\\\\\\\\\\\\\\" (2026-01-31 16:30:17 +0000 UTC to 2026-03-02 16:30:18 +0000 UTC (now=2026-01-31 16:30:33.679272837 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679570 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769877028\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769877028\\\\\\\\\\\\\\\" (2026-01-31 15:30:28 +0000 UTC to 2027-01-31 15:30:28 +0000 UTC (now=2026-01-31 16:30:33.679540335 +0000 UTC))\\\\\\\"\\\\nI0131 16:30:33.679601 1 secure_serving.go:213] 
Serving securely on [::]:17697\\\\nI0131 16:30:33.679631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0131 16:30:33.680157 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0131 16:30:33.680174 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1652759360/tls.crt::/tmp/serving-cert-1652759360/tls.key\\\\\\\"\\\\nI0131 16:30:33.680265 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0131 16:30:33.680328 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 16:30:33.680263 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T16:30:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 
16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.820094 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08951bd9-f798-4175-9860-3390d73263f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T16:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2884ab986397affb64c1c6d73eed89073274b418a0bedca6d41dc3c3a8282931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acd9a08b9f9a6bb3be8c8d7a0a759e09041cd31e3287fce6ffea507feb8f2884\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dca7d7ea46ba48ce9473cbd7b2b01802e2bf97ad8936a5e91fee5d9072ea42a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T16:30:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:31:54Z is after 2025-08-24T17:21:41Z" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.896601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.896666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.896683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.896709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.896726 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.999630 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.999688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:54 crc kubenswrapper[4730]: I0131 16:31:54.999710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:54.999739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:54.999761 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:54Z","lastTransitionTime":"2026-01-31T16:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.102954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.103002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.103020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.103043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.103060 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.206742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.206855 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.206870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.206888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.206900 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.309761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.309838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.309872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.309925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.309938 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.412495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.412557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.412575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.412602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.412619 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.463866 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.464026 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:55 crc kubenswrapper[4730]: E0131 16:31:55.464068 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.463904 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:55 crc kubenswrapper[4730]: E0131 16:31:55.464546 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:55 crc kubenswrapper[4730]: E0131 16:31:55.464901 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.465244 4730 scope.go:117] "RemoveContainer" containerID="8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731" Jan 31 16:31:55 crc kubenswrapper[4730]: E0131 16:31:55.465514 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-25nsf_openshift-ovn-kubernetes(8e53a6e0-ca28-4088-8ced-22ba134f316e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.468966 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:09:18.509463965 +0000 UTC Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.515334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.515377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.515393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.515416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.515433 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.618106 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.618156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.618167 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.618184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.618196 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.721114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.721185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.721202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.721226 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.721244 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.824648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.824720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.824736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.824840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.824870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.928173 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.928209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.928220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.928234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:55 crc kubenswrapper[4730]: I0131 16:31:55.928246 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:55Z","lastTransitionTime":"2026-01-31T16:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.031730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.031791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.031837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.031861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.031879 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.135098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.135170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.135211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.135244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.135266 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.238405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.238465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.238486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.238512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.238529 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.341372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.341449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.341467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.341492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.341515 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.444866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.445194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.445324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.445443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.445570 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.463955 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:56 crc kubenswrapper[4730]: E0131 16:31:56.464125 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.469213 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:43:07.416062199 +0000 UTC Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.548602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.549436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.549613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.549916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.550135 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.653903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.654256 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.654403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.654593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.654734 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.757327 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.757359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.757371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.757387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.757408 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.860240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.860389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.860406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.860429 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.860445 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.962598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.963012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.963227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.963422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:56 crc kubenswrapper[4730]: I0131 16:31:56.963590 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:56Z","lastTransitionTime":"2026-01-31T16:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.067316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.067582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.067710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.067884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.068064 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.171348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.171395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.171407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.171425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.171438 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.274694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.274767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.274790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.274840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.274857 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.377990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.378070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.378095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.378125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.378148 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.463499 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.463582 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.463518 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:57 crc kubenswrapper[4730]: E0131 16:31:57.463696 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:57 crc kubenswrapper[4730]: E0131 16:31:57.463866 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:57 crc kubenswrapper[4730]: E0131 16:31:57.464051 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.469719 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:22:07.988800731 +0000 UTC Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.481302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.481348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.481365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.481388 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.481405 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.584450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.584512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.584530 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.584559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.584576 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.687603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.687677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.687695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.687722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.687743 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.790229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.790289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.790305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.790334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.790361 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.893625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.893686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.893711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.893738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.893758 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.996391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.996449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.996466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.996520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:57 crc kubenswrapper[4730]: I0131 16:31:57.996540 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:57Z","lastTransitionTime":"2026-01-31T16:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.099789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.099843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.099852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.099867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.099878 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.202847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.202931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.202949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.202974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.202993 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.305946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.306004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.306022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.306048 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.306073 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.409617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.410275 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.410547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.410847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.411048 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.464036 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:31:58 crc kubenswrapper[4730]: E0131 16:31:58.464368 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.471586 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:57:16.814284371 +0000 UTC Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.515034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.515086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.515103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.515126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.515143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.618498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.618601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.618620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.618683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.618788 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.722130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.722194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.722211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.722235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.722254 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.824125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.824180 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.824197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.824218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.824236 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.926703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.926756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.926775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.926794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:58 crc kubenswrapper[4730]: I0131 16:31:58.926831 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:58Z","lastTransitionTime":"2026-01-31T16:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.030132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.030203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.030216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.030236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.030251 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.133505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.133563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.133580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.133603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.133621 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.237750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.237832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.237854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.237880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.237901 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.340797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.340898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.340921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.340949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.340972 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.443741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.443791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.443839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.443863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.443879 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.463723 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.463787 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.463834 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:31:59 crc kubenswrapper[4730]: E0131 16:31:59.464048 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:31:59 crc kubenswrapper[4730]: E0131 16:31:59.464291 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:31:59 crc kubenswrapper[4730]: E0131 16:31:59.464381 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.473985 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:38:28.592261695 +0000 UTC Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.546322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.546376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.546399 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.546425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.546445 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.650129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.650176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.650189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.650210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.650224 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.753516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.753552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.753560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.753574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.753586 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.856204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.856270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.856284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.856300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.856314 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.959609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.959670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.959679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.959695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:31:59 crc kubenswrapper[4730]: I0131 16:31:59.959707 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:31:59Z","lastTransitionTime":"2026-01-31T16:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.062866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.062937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.062955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.062982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.063002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.166624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.166718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.166736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.166763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.166778 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.269515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.269575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.269592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.269614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.269631 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.372335 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.372425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.372452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.372478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.372496 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.463616 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:00 crc kubenswrapper[4730]: E0131 16:32:00.463855 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.475360 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.475417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.475439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.475467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.475489 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.502728 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:16:00.226784237 +0000 UTC Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.578926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.579016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.579034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.579056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.579072 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.681882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.681937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.681955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.681976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.681993 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.784148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.784198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.784215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.784239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.784255 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.887501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.887541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.887558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.887582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.887600 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.991117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.991166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.991183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.991206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:00 crc kubenswrapper[4730]: I0131 16:32:00.991223 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:00Z","lastTransitionTime":"2026-01-31T16:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.094320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.094387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.094405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.094432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.094449 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.197635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.197687 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.197703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.197727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.197744 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.301328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.301379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.301394 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.301416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.301432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.403707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.403765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.403782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.403829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.403846 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.463724 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.463850 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.463884 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:01 crc kubenswrapper[4730]: E0131 16:32:01.464013 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:01 crc kubenswrapper[4730]: E0131 16:32:01.464154 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:01 crc kubenswrapper[4730]: E0131 16:32:01.464309 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.503875 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:33:03.195382213 +0000 UTC Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.507235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.507283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.507299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.507319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.507335 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.611339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.611401 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.611420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.611448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.611470 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.714169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.714214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.714230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.714249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.714262 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.816652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.816702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.816722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.816745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.816763 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.919083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.919138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.919154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.919178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:01 crc kubenswrapper[4730]: I0131 16:32:01.919199 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:01Z","lastTransitionTime":"2026-01-31T16:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.022011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.022083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.022105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.022128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.022145 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.125215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.125280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.125299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.125324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.125341 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.233906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.233978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.233996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.234018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.234035 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.336643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.336689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.336701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.336720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.336734 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.439798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.439883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.439901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.439924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.439941 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.463515 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:02 crc kubenswrapper[4730]: E0131 16:32:02.463737 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.504845 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:52:07.727505469 +0000 UTC Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.543253 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.543609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.543830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.544017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.544177 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.647633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.647704 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.647722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.647745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.647762 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.751198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.751255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.751272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.751296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.751313 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.854730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.854788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.854849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.854882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.854906 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.869524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.869736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.869933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.870073 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.870205 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: E0131 16:32:02.890496 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:32:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.895023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.895206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.895341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.895497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.895629 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: E0131 16:32:02.915593 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:32:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.920884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.920956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.920984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.921054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.921081 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: E0131 16:32:02.941432 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:32:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.946611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.946836 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.946974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.947133 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.947295 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: E0131 16:32:02.966419 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:32:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.971221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.971277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.971295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.971318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.971337 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:02 crc kubenswrapper[4730]: E0131 16:32:02.989997 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T16:32:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fd417392-7b12-4953-b7d4-8fe09595e010\\\",\\\"systemUUID\\\":\\\"04f37162-2d97-4238-903e-03a07bd637ec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T16:32:02Z is after 2025-08-24T17:21:41Z" Jan 31 16:32:02 crc kubenswrapper[4730]: E0131 16:32:02.990560 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.992681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.992739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.992759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.992783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:02 crc kubenswrapper[4730]: I0131 16:32:02.992823 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:02Z","lastTransitionTime":"2026-01-31T16:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.095438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.095489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.095507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.095527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.095543 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.198705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.198762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.198785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.198846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.198870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.301496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.301556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.301580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.301613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.301637 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.404877 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.404933 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.404967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.404989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.405007 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.464036 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.464181 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:03 crc kubenswrapper[4730]: E0131 16:32:03.464319 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.464523 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:03 crc kubenswrapper[4730]: E0131 16:32:03.464624 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:03 crc kubenswrapper[4730]: E0131 16:32:03.464937 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.505747 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:09:51.288992502 +0000 UTC Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.508150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.508195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.508213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.508237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.508255 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.612165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.612224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.612244 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.612269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.612288 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.715134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.715223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.715243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.715267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.715287 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.818483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.818552 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.818575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.818604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.818624 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.921794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.921867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.921883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.921909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:03 crc kubenswrapper[4730]: I0131 16:32:03.921928 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:03Z","lastTransitionTime":"2026-01-31T16:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.025381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.025430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.025446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.025472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.025490 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.134052 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.134213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.134239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.134285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.134308 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.237444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.237495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.237510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.237532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.237549 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.340562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.340675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.340697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.340719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.340736 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.443271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.443331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.443348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.443370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.443387 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.464193 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:04 crc kubenswrapper[4730]: E0131 16:32:04.464366 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.504242 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bndmc" podStartSLOduration=89.504174938 podStartE2EDuration="1m29.504174938s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.50052072 +0000 UTC m=+111.306577646" watchObservedRunningTime="2026-01-31 16:32:04.504174938 +0000 UTC m=+111.310231894" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.505941 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:50:48.154186923 +0000 UTC Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.538963 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5f4md" podStartSLOduration=89.538934464 podStartE2EDuration="1m29.538934464s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.522282208 +0000 UTC m=+111.328339164" watchObservedRunningTime="2026-01-31 16:32:04.538934464 +0000 UTC m=+111.344991420" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.546970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.547031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.547042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.547059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.547074 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.581624 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podStartSLOduration=89.581598315 podStartE2EDuration="1m29.581598315s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.539616284 +0000 UTC m=+111.345673230" watchObservedRunningTime="2026-01-31 16:32:04.581598315 +0000 UTC m=+111.387655271" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.583328 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=32.583034928000004 podStartE2EDuration="32.583034928s" podCreationTimestamp="2026-01-31 16:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.57839147 +0000 UTC m=+111.384448406" watchObservedRunningTime="2026-01-31 16:32:04.583034928 +0000 UTC m=+111.389091884" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.650828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.650893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.650907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.650930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.650946 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.696648 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-c8lpn" podStartSLOduration=89.696621662 podStartE2EDuration="1m29.696621662s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.694320884 +0000 UTC m=+111.500377810" watchObservedRunningTime="2026-01-31 16:32:04.696621662 +0000 UTC m=+111.502678618" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.729307 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7p26r" podStartSLOduration=89.729281505 podStartE2EDuration="1m29.729281505s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.713456024 +0000 UTC m=+111.519512950" watchObservedRunningTime="2026-01-31 16:32:04.729281505 +0000 UTC m=+111.535338441" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.741334 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.741318104 podStartE2EDuration="27.741318104s" podCreationTimestamp="2026-01-31 16:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.741151809 +0000 UTC m=+111.547208735" watchObservedRunningTime="2026-01-31 16:32:04.741318104 +0000 UTC m=+111.547375030" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.754006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.754062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.754074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.754091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.754103 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.807439 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.807413353 podStartE2EDuration="1m31.807413353s" podCreationTimestamp="2026-01-31 16:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.807200567 +0000 UTC m=+111.613257493" watchObservedRunningTime="2026-01-31 16:32:04.807413353 +0000 UTC m=+111.613470279" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.825226 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.825204953 podStartE2EDuration="1m24.825204953s" podCreationTimestamp="2026-01-31 16:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.825197753 +0000 UTC m=+111.631254719" watchObservedRunningTime="2026-01-31 16:32:04.825204953 +0000 UTC m=+111.631261879" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.838264 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.838242332 podStartE2EDuration="1m3.838242332s" podCreationTimestamp="2026-01-31 16:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.837912242 +0000 UTC m=+111.643969188" watchObservedRunningTime="2026-01-31 16:32:04.838242332 +0000 UTC m=+111.644299268" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.856071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.856334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.856434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.856537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.856648 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.868367 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6p6cq" podStartSLOduration=89.868346988 podStartE2EDuration="1m29.868346988s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:04.86806212 +0000 UTC m=+111.674119046" watchObservedRunningTime="2026-01-31 16:32:04.868346988 +0000 UTC m=+111.674403924" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.960863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.960902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.960917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.960952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:04 crc kubenswrapper[4730]: I0131 16:32:04.960968 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:04Z","lastTransitionTime":"2026-01-31T16:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.064656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.064712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.064727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.064749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.064762 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.167335 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.167663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.167916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.168115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.168267 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.271186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.271229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.271240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.271257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.271268 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.373748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.373793 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.373846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.373868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.373885 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.464022 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.464084 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.464138 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:05 crc kubenswrapper[4730]: E0131 16:32:05.464295 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:05 crc kubenswrapper[4730]: E0131 16:32:05.464587 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:05 crc kubenswrapper[4730]: E0131 16:32:05.464795 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.476217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.476269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.476294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.476324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.476341 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.506443 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:44:48.324902873 +0000 UTC Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.579452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.579521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.579538 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.579561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.579578 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.681879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.682227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.682426 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.682581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.682722 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.786264 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.786575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.786722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.786900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.787045 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.889592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.889645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.889661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.889683 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.889700 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.992404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.992451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.992459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.992471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:05 crc kubenswrapper[4730]: I0131 16:32:05.992480 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:05Z","lastTransitionTime":"2026-01-31T16:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.095051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.095125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.095149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.095178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.095200 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.197838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.197924 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.197942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.197964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.197981 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.300916 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.300975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.300993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.301015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.301032 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.403664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.403730 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.403750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.403772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.403788 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.463542 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:06 crc kubenswrapper[4730]: E0131 16:32:06.463865 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.506666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.506706 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:18:08.143551189 +0000 UTC Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.506743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.506770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.506798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.506852 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.610439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.610545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.610601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.610631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.610654 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.714625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.714663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.714673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.714688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.714699 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.817952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.818248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.818423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.818565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.818724 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.922084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.922135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.922152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.922177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:06 crc kubenswrapper[4730]: I0131 16:32:06.922201 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:06Z","lastTransitionTime":"2026-01-31T16:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.025409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.025459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.025478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.025503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.025521 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.128044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.128102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.128119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.128145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.128163 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.231409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.231461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.231479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.231502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.231518 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.334290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.334358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.334381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.334409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.334432 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.437248 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.437542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.437742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.437932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.438070 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.463817 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:07 crc kubenswrapper[4730]: E0131 16:32:07.463942 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.463997 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:07 crc kubenswrapper[4730]: E0131 16:32:07.464156 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.464467 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:07 crc kubenswrapper[4730]: E0131 16:32:07.464854 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.507561 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:22:43.856549073 +0000 UTC Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.545332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.545389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.545426 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.545456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.545480 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.649211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.649250 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.649285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.649301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.649313 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.751890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.751941 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.751958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.751978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.751996 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.854961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.855041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.855057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.855075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.855086 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.957857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.957892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.957903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.957917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:07 crc kubenswrapper[4730]: I0131 16:32:07.957929 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:07Z","lastTransitionTime":"2026-01-31T16:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.060652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.060714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.060737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.060761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.060779 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.164772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.164898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.164923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.164953 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.164971 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.269038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.269103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.269122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.269148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.269166 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.372478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.372537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.372553 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.372577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.372595 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.464404 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:08 crc kubenswrapper[4730]: E0131 16:32:08.464597 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.475920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.476124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.476304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.476453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.476608 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.508330 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:34:42.065064767 +0000 UTC Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.580873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.580980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.580999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.581057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.581079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.687300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.687359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.687376 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.687400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.687417 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.790340 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.790402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.790419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.790444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.790463 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.894230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.894304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.894323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.894349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.894369 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.996899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.996950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.996966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.996987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:08 crc kubenswrapper[4730]: I0131 16:32:08.997002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:08Z","lastTransitionTime":"2026-01-31T16:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.100363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.100437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.100464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.100496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.100520 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.203043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.203100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.203119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.203142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.203163 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.306156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.306218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.306237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.306261 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.306279 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.409447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.409516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.409542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.409574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.409597 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.464081 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.464191 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:09 crc kubenswrapper[4730]: E0131 16:32:09.464261 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:09 crc kubenswrapper[4730]: E0131 16:32:09.464399 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.464105 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:09 crc kubenswrapper[4730]: E0131 16:32:09.464562 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.508727 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:43:09.397413509 +0000 UTC Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.512438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.512493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.512511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.512535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.512554 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.616108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.616159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.616170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.616187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.616198 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.719057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.719110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.719127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.719148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.719165 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.821322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.821386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.821405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.821428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.821446 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.924016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.924068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.924079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.924096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:09 crc kubenswrapper[4730]: I0131 16:32:09.924108 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:09Z","lastTransitionTime":"2026-01-31T16:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.027701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.027769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.027791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.027851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.027880 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.130923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.131044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.131070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.131100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.131121 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.234699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.234763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.234780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.234833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.234851 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.297378 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/1.log" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.298194 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/0.log" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.298255 4730 generic.go:334] "Generic (PLEG): container finished" podID="2d1c5cbc-307d-4556-b162-2c5c0103662d" containerID="628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0" exitCode=1 Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.298308 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerDied","Data":"628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.298352 4730 scope.go:117] "RemoveContainer" containerID="b175838207241f698cdb63d70a6434f5691cf9a04306d82914f2160a42f4466a" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.299284 4730 scope.go:117] "RemoveContainer" containerID="628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0" Jan 31 16:32:10 crc kubenswrapper[4730]: E0131 16:32:10.300866 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-c8lpn_openshift-multus(2d1c5cbc-307d-4556-b162-2c5c0103662d)\"" pod="openshift-multus/multus-c8lpn" podUID="2d1c5cbc-307d-4556-b162-2c5c0103662d" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.337643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.337676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.337686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.337701 
4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.337712 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.442911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.442969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.442989 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.443015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.443036 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.464311 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.465167 4730 scope.go:117] "RemoveContainer" containerID="8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731" Jan 31 16:32:10 crc kubenswrapper[4730]: E0131 16:32:10.465598 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.509611 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:01:28.713600746 +0000 UTC Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.562546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.562578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.562586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.562600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.562608 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.665287 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.665344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.665358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.665374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.665401 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.767623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.767653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.767665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.767681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.767692 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.870161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.870188 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.870198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.870213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.870226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.971743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.971776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.971785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.971798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:10 crc kubenswrapper[4730]: I0131 16:32:10.971822 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:10Z","lastTransitionTime":"2026-01-31T16:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.074109 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.074134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.074143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.074155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.074165 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.176752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.176790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.176823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.176838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.176849 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.279341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.279377 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.279386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.279402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.279411 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.285828 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sg8lw"] Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.302342 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/1.log" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.304034 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/3.log" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.306217 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:11 crc kubenswrapper[4730]: E0131 16:32:11.306310 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.306436 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerStarted","Data":"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.307222 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.381383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.381414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.381422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.381435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.381444 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.463539 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.463588 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.463651 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:11 crc kubenswrapper[4730]: E0131 16:32:11.463743 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:11 crc kubenswrapper[4730]: E0131 16:32:11.463864 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:11 crc kubenswrapper[4730]: E0131 16:32:11.463922 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.483507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.483535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.483543 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.483555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.483565 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.510346 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:47:56.931737676 +0000 UTC Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.585249 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.585280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.585289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.585301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.585310 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.687428 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.687458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.687466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.687478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.687488 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.790727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.790766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.790777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.790791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.790815 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.893922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.893954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.893963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.893981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.893992 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.996304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.996352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.996370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.996395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:11 crc kubenswrapper[4730]: I0131 16:32:11.996413 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:11Z","lastTransitionTime":"2026-01-31T16:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.099528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.099577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.099594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.099616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.099632 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.202406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.202476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.202497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.202525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.202543 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.305314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.305388 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.305413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.305440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.305467 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.408205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.408254 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.408271 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.408295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.408312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.510464 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:40:34.964151217 +0000 UTC Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.511969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.512105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.512255 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.512300 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.512322 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.614891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.614943 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.614960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.614982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.615000 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.717367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.717452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.717473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.717496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.717515 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.820767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.820891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.820913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.820944 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.820966 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.923863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.923922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.923938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.923960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:12 crc kubenswrapper[4730]: I0131 16:32:12.923977 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:12Z","lastTransitionTime":"2026-01-31T16:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.026431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.026464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.026475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.026490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.026502 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:13Z","lastTransitionTime":"2026-01-31T16:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.128352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.128386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.128396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.128409 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.128420 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:13Z","lastTransitionTime":"2026-01-31T16:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.230895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.230944 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.230961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.230982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.231002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:13Z","lastTransitionTime":"2026-01-31T16:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.334302 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.334361 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.334380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.334405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.334423 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:13Z","lastTransitionTime":"2026-01-31T16:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.385320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.385373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.385390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.385411 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.385429 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T16:32:13Z","lastTransitionTime":"2026-01-31T16:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.450657 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podStartSLOduration=98.450587251 podStartE2EDuration="1m38.450587251s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:11.348579535 +0000 UTC m=+118.154636451" watchObservedRunningTime="2026-01-31 16:32:13.450587251 +0000 UTC m=+120.256644207" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.453743 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm"] Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.454639 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.457394 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.457544 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.460186 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.460230 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.463212 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.463270 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.463281 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:13 crc kubenswrapper[4730]: E0131 16:32:13.463382 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.463403 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:13 crc kubenswrapper[4730]: E0131 16:32:13.463554 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:13 crc kubenswrapper[4730]: E0131 16:32:13.463650 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:13 crc kubenswrapper[4730]: E0131 16:32:13.463732 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.511309 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:26:08.574136556 +0000 UTC Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.511359 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 16:32:13 crc kubenswrapper[4730]: I0131 16:32:13.519966 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 16:32:14 crc kubenswrapper[4730]: E0131 16:32:14.587748 4730 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.587898 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.587947 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.587972 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.588009 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.588058 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.689185 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.689447 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.689471 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.689497 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.689518 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.689574 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.689865 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 
16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.691523 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.691673 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.701836 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.706288 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.707286 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.718680 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad0a9f52-2dcf-4568-9968-367bc9b87aa8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2xwqm\" (UID: \"ad0a9f52-2dcf-4568-9968-367bc9b87aa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.981164 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 16:32:14 crc kubenswrapper[4730]: I0131 16:32:14.989123 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.463135 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.463142 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.463295 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:15 crc kubenswrapper[4730]: E0131 16:32:15.463264 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:15 crc kubenswrapper[4730]: E0131 16:32:15.463350 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:15 crc kubenswrapper[4730]: E0131 16:32:15.463510 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.463975 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:15 crc kubenswrapper[4730]: E0131 16:32:15.464163 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.610934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" event={"ID":"ad0a9f52-2dcf-4568-9968-367bc9b87aa8","Type":"ContainerStarted","Data":"e7f6e0970faba66a95cf71e9250c53d9a5505883f62110e329b1f4015e401e0c"} Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.611006 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" event={"ID":"ad0a9f52-2dcf-4568-9968-367bc9b87aa8","Type":"ContainerStarted","Data":"01ec40e7b4a34f35e8e0fee9a2b1157684f6965ebfcf32aa52dbe7c0becebefc"} Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.636005 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2xwqm" podStartSLOduration=100.635981161 podStartE2EDuration="1m40.635981161s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:15.629702624 +0000 UTC m=+122.435759600" watchObservedRunningTime="2026-01-31 16:32:15.635981161 +0000 UTC m=+122.442038107" Jan 31 16:32:15 crc kubenswrapper[4730]: I0131 16:32:15.745002 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:32:17 crc kubenswrapper[4730]: I0131 16:32:17.463072 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:17 crc kubenswrapper[4730]: I0131 16:32:17.463138 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:17 crc kubenswrapper[4730]: E0131 16:32:17.463197 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:17 crc kubenswrapper[4730]: I0131 16:32:17.463228 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:17 crc kubenswrapper[4730]: E0131 16:32:17.463372 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:17 crc kubenswrapper[4730]: E0131 16:32:17.463456 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:17 crc kubenswrapper[4730]: I0131 16:32:17.463864 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:17 crc kubenswrapper[4730]: E0131 16:32:17.464049 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:19 crc kubenswrapper[4730]: I0131 16:32:19.463374 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:19 crc kubenswrapper[4730]: I0131 16:32:19.463471 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:19 crc kubenswrapper[4730]: I0131 16:32:19.463523 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:19 crc kubenswrapper[4730]: I0131 16:32:19.463640 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:19 crc kubenswrapper[4730]: E0131 16:32:19.463641 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:19 crc kubenswrapper[4730]: E0131 16:32:19.463996 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:19 crc kubenswrapper[4730]: E0131 16:32:19.463952 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:19 crc kubenswrapper[4730]: E0131 16:32:19.464057 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:19 crc kubenswrapper[4730]: E0131 16:32:19.603626 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 16:32:21 crc kubenswrapper[4730]: I0131 16:32:21.463168 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:21 crc kubenswrapper[4730]: I0131 16:32:21.463448 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:21 crc kubenswrapper[4730]: E0131 16:32:21.463502 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:21 crc kubenswrapper[4730]: I0131 16:32:21.463232 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:21 crc kubenswrapper[4730]: I0131 16:32:21.463309 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:21 crc kubenswrapper[4730]: E0131 16:32:21.463698 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:21 crc kubenswrapper[4730]: E0131 16:32:21.463818 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:21 crc kubenswrapper[4730]: E0131 16:32:21.463857 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:23 crc kubenswrapper[4730]: I0131 16:32:23.463422 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:23 crc kubenswrapper[4730]: I0131 16:32:23.463468 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:23 crc kubenswrapper[4730]: I0131 16:32:23.463504 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:23 crc kubenswrapper[4730]: I0131 16:32:23.463489 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:23 crc kubenswrapper[4730]: E0131 16:32:23.463603 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:23 crc kubenswrapper[4730]: E0131 16:32:23.463754 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:23 crc kubenswrapper[4730]: E0131 16:32:23.463935 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:23 crc kubenswrapper[4730]: E0131 16:32:23.464073 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:24 crc kubenswrapper[4730]: E0131 16:32:24.605292 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 16:32:25 crc kubenswrapper[4730]: I0131 16:32:25.463737 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:25 crc kubenswrapper[4730]: I0131 16:32:25.463748 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:25 crc kubenswrapper[4730]: E0131 16:32:25.463976 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:25 crc kubenswrapper[4730]: I0131 16:32:25.464190 4730 scope.go:117] "RemoveContainer" containerID="628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0" Jan 31 16:32:25 crc kubenswrapper[4730]: I0131 16:32:25.464406 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:25 crc kubenswrapper[4730]: I0131 16:32:25.464417 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:25 crc kubenswrapper[4730]: E0131 16:32:25.464539 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:25 crc kubenswrapper[4730]: E0131 16:32:25.464713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:25 crc kubenswrapper[4730]: E0131 16:32:25.464837 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:25 crc kubenswrapper[4730]: I0131 16:32:25.668354 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/1.log" Jan 31 16:32:25 crc kubenswrapper[4730]: I0131 16:32:25.668413 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerStarted","Data":"45cc2c43568992c508493fd3172eb9663d13fb70f0aeb76f87274df206079158"} Jan 31 16:32:27 crc kubenswrapper[4730]: I0131 16:32:27.464000 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:27 crc kubenswrapper[4730]: I0131 16:32:27.464036 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:27 crc kubenswrapper[4730]: I0131 16:32:27.464079 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:27 crc kubenswrapper[4730]: I0131 16:32:27.464113 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:27 crc kubenswrapper[4730]: E0131 16:32:27.464183 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:27 crc kubenswrapper[4730]: E0131 16:32:27.464297 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:27 crc kubenswrapper[4730]: E0131 16:32:27.464425 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:27 crc kubenswrapper[4730]: E0131 16:32:27.464633 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:29 crc kubenswrapper[4730]: I0131 16:32:29.464153 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:29 crc kubenswrapper[4730]: I0131 16:32:29.464231 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:29 crc kubenswrapper[4730]: I0131 16:32:29.464263 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:29 crc kubenswrapper[4730]: E0131 16:32:29.464341 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sg8lw" podUID="39ef74a4-f27d-498b-8bbd-aae64590d030" Jan 31 16:32:29 crc kubenswrapper[4730]: E0131 16:32:29.464476 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 16:32:29 crc kubenswrapper[4730]: E0131 16:32:29.464677 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 16:32:29 crc kubenswrapper[4730]: I0131 16:32:29.464949 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:29 crc kubenswrapper[4730]: E0131 16:32:29.465150 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.463479 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.463512 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.463532 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.463571 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.467751 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.467921 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.468466 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.468066 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.468783 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 16:32:31 crc kubenswrapper[4730]: I0131 16:32:31.472295 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.873433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.924514 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.925040 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.928621 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.929085 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w2n4l"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.929484 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.929544 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.931183 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.931902 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.933256 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.933844 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.934198 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.936283 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-frj85"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.937029 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.943313 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.944339 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.964370 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.964463 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.964651 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.965353 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.965880 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.966019 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.966329 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.966451 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.966614 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.966778 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.966931 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.967089 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.967295 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.967464 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.967641 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.967776 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.967946 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.968077 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.968215 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.968680 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.968972 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4cfvt"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.969689 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.969842 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.969941 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.969890 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.969958 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.970001 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.974740 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.974927 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.975071 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.975202 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.976988 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.977546 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.981466 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vk49s"] Jan 31 16:32:33 crc kubenswrapper[4730]: I0131 16:32:33.982059 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.009589 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kjkn"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.010498 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.012291 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.012438 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.012441 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.012544 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.012739 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.012848 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.012898 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.013022 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.013244 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.013390 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.013445 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.013881 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.013983 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014005 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014018 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014088 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014164 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014176 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014255 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014325 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014384 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014337 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014485 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.014510 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.033167 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.033790 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.033872 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.034003 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.034016 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.034203 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-28kdr"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.034750 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.034836 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.035106 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.035577 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.036452 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.036662 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.037656 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6v2xk"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.037953 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.038862 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.038928 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.038966 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.041188 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.041351 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.041372 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.041507 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.041698 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.043171 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w2n4l"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.043930 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.044722 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.044868 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.045277 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.046314 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048183 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-client-ca\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048223 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-image-import-ca\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048260 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d169f8-a558-4b08-a62d-1e4079eb26e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048285 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048308 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5w5\" (UniqueName: \"kubernetes.io/projected/47607256-aa97-41f0-9847-fdd1b79766ff-kube-api-access-qd5w5\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048333 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-client-ca\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048354 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-etcd-client\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048376 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s2nfr\" (UniqueName: \"kubernetes.io/projected/222cebc4-19ee-44bb-9de4-da091e798019-kube-api-access-s2nfr\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048405 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-config\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048424 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-config\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048442 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-etcd-client\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048476 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blnwb\" (UniqueName: \"kubernetes.io/projected/d853c17f-0402-432b-bdee-1c8df9fa0093-kube-api-access-blnwb\") pod \"cluster-samples-operator-665b6dd947-9pj8k\" (UID: \"d853c17f-0402-432b-bdee-1c8df9fa0093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048500 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/222cebc4-19ee-44bb-9de4-da091e798019-machine-approver-tls\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048522 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048546 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-encryption-config\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048566 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/222cebc4-19ee-44bb-9de4-da091e798019-config\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048589 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zqk\" (UniqueName: \"kubernetes.io/projected/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-kube-api-access-p8zqk\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048611 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-serving-cert\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048630 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048650 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-audit-dir\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048672 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-etcd-serving-ca\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048691 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-images\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048710 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81d169f8-a558-4b08-a62d-1e4079eb26e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048747 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a029edf-d8ad-4314-9296-0f6c4f707330-serving-cert\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: 
\"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048769 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048791 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-encryption-config\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048842 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048856 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-service-ca-bundle\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-audit-policies\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048890 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/222cebc4-19ee-44bb-9de4-da091e798019-auth-proxy-config\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048905 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvw2\" (UniqueName: \"kubernetes.io/projected/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-kube-api-access-wnvw2\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048920 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d853c17f-0402-432b-bdee-1c8df9fa0093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9pj8k\" (UID: 
\"d853c17f-0402-432b-bdee-1c8df9fa0093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048935 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxflx\" (UniqueName: \"kubernetes.io/projected/9a029edf-d8ad-4314-9296-0f6c4f707330-kube-api-access-rxflx\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048949 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-config\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048963 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-audit-dir\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b96638-d5c4-43d4-ab38-15972a55d0f4-serving-cert\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.048999 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-config\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049013 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47607256-aa97-41f0-9847-fdd1b79766ff-serving-cert\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049027 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jz98\" (UniqueName: \"kubernetes.io/projected/81d169f8-a558-4b08-a62d-1e4079eb26e3-kube-api-access-8jz98\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049042 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88xq\" (UniqueName: \"kubernetes.io/projected/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-kube-api-access-v88xq\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: 
\"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049066 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-audit\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049085 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-serving-cert\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049101 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-node-pullsecrets\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049114 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-config\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/a4b96638-d5c4-43d4-ab38-15972a55d0f4-kube-api-access-x75gm\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049147 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.049375 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.051344 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.052085 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.055976 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.056202 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.056844 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.058837 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z6ftx"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.059351 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.059586 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2bcp4"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.060058 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2bcp4" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.068493 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.068605 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.069116 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.069399 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.069495 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.069574 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.069649 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.069724 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.081817 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jmpc6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.069576 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.082627 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.083941 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.084685 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.084960 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.085142 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.085256 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.087968 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.103119 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.103324 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.103348 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.103485 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.103847 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.104162 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.104264 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.104368 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.104525 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cp5tf"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.104880 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.105465 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.105758 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.106078 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.106701 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.107115 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.107137 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.108943 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.109443 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.109899 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-frj85"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.113283 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.113321 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.113921 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.117773 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.118411 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.126755 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.127420 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.127927 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.128233 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.134595 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.134748 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.135323 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.135688 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.136206 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.136440 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.136578 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.136889 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.137988 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-28kdr"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.143041 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fl66m"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.143617 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.143907 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.143986 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.143911 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-txbq6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.144623 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.149727 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blnwb\" (UniqueName: \"kubernetes.io/projected/d853c17f-0402-432b-bdee-1c8df9fa0093-kube-api-access-blnwb\") pod \"cluster-samples-operator-665b6dd947-9pj8k\" (UID: \"d853c17f-0402-432b-bdee-1c8df9fa0093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.149858 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.149965 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/222cebc4-19ee-44bb-9de4-da091e798019-machine-approver-tls\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150043 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-encryption-config\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150200 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222cebc4-19ee-44bb-9de4-da091e798019-config\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150314 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zqk\" (UniqueName: \"kubernetes.io/projected/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-kube-api-access-p8zqk\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150408 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150480 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-audit-dir\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150547 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-etcd-serving-ca\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150610 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-images\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150672 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-serving-cert\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150755 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a029edf-d8ad-4314-9296-0f6c4f707330-serving-cert\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81d169f8-a558-4b08-a62d-1e4079eb26e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150922 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.152457 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-encryption-config\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.152554 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.152634 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-service-ca-bundle\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.152716 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-audit-policies\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.152976 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/222cebc4-19ee-44bb-9de4-da091e798019-auth-proxy-config\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153056 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d853c17f-0402-432b-bdee-1c8df9fa0093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9pj8k\" (UID: \"d853c17f-0402-432b-bdee-1c8df9fa0093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153128 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxflx\" (UniqueName: \"kubernetes.io/projected/9a029edf-d8ad-4314-9296-0f6c4f707330-kube-api-access-rxflx\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153192 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-config\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153260 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-audit-dir\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153326 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvw2\" (UniqueName: \"kubernetes.io/projected/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-kube-api-access-wnvw2\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153401 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b96638-d5c4-43d4-ab38-15972a55d0f4-serving-cert\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153475 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-config\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153561 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47607256-aa97-41f0-9847-fdd1b79766ff-serving-cert\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153628 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88xq\" (UniqueName: \"kubernetes.io/projected/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-kube-api-access-v88xq\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153692 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jz98\" (UniqueName: \"kubernetes.io/projected/81d169f8-a558-4b08-a62d-1e4079eb26e3-kube-api-access-8jz98\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153766 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-audit\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153862 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-serving-cert\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153929 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-node-pullsecrets\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.153999 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-config\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154071 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/a4b96638-d5c4-43d4-ab38-15972a55d0f4-kube-api-access-x75gm\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154142 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154210 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-client-ca\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154330 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-image-import-ca\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154408 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d169f8-a558-4b08-a62d-1e4079eb26e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154476 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154555 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-client-ca\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154624 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-etcd-client\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.154690 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5w5\" (UniqueName: \"kubernetes.io/projected/47607256-aa97-41f0-9847-fdd1b79766ff-kube-api-access-qd5w5\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.162147 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2nfr\" (UniqueName: \"kubernetes.io/projected/222cebc4-19ee-44bb-9de4-da091e798019-kube-api-access-s2nfr\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.162859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-config\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.165449 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-etcd-client\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.165624 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-config\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.165672 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d169f8-a558-4b08-a62d-1e4079eb26e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.151867 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222cebc4-19ee-44bb-9de4-da091e798019-config\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.151423 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.161829 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-service-ca-bundle\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.151673 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-images\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" 
Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.161905 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/222cebc4-19ee-44bb-9de4-da091e798019-auth-proxy-config\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.152196 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-etcd-serving-ca\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.161958 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b96638-d5c4-43d4-ab38-15972a55d0f4-serving-cert\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.162294 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-audit-policies\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.162824 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81d169f8-a558-4b08-a62d-1e4079eb26e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.163223 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-config\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.163264 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-audit-dir\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.163368 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a029edf-d8ad-4314-9296-0f6c4f707330-serving-cert\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.163550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-config\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: 
\"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.157617 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.163690 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.163725 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-node-pullsecrets\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.164193 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-config\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.164390 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-config\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.157298 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-encryption-config\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.164639 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-audit\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.165131 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-client-ca\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.151701 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-audit-dir\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: 
\"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.150927 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.166453 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jwc2k"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.166815 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ks8gz"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.166900 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-client-ca\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.159135 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167219 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167317 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167399 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d853c17f-0402-432b-bdee-1c8df9fa0093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9pj8k\" (UID: \"d853c17f-0402-432b-bdee-1c8df9fa0093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167509 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167533 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167604 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167653 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.167795 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-config\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.168340 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-j5kgc"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.168747 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.169404 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47607256-aa97-41f0-9847-fdd1b79766ff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.172475 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jmpc6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.172499 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.172785 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kjkn"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.172852 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.173370 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/222cebc4-19ee-44bb-9de4-da091e798019-machine-approver-tls\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.173795 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47607256-aa97-41f0-9847-fdd1b79766ff-serving-cert\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.174237 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-encryption-config\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.174365 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-image-import-ca\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.174458 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vk49s"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.175408 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.183603 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.184048 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-etcd-client\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.184319 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.184417 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-serving-cert\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.189130 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.190346 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.193315 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-serving-cert\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.193547 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.207456 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-etcd-client\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.207590 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.210104 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.216883 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.218087 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5vcn4"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.219030 4730 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4cfvt"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.219124 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.221467 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cp5tf"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.224269 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z6ftx"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.227103 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2bcp4"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.227289 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.229297 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.235426 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.235612 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.235672 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6v2xk"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.238707 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.240182 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-txbq6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.243335 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.245006 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.245819 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gj2x5"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.246912 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.247566 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.248618 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wc5vj"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.249235 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.251686 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.253126 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.254760 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5vcn4"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.257844 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.259836 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.261527 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.263789 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.265312 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fl66m"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.267075 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.267374 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ks8gz"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.268853 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-j5kgc"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.270181 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wc5vj"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.271358 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.272782 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gj2x5"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.274062 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nxpmk"] Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.274702 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.287441 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.307056 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.327062 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.347176 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.376364 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.386773 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.406747 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.427916 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.447211 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.467660 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.487742 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.507863 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.527713 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.547626 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.567940 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.587952 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.607607 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.628289 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.647381 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.676768 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.688285 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.708745 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.728080 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.748337 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.768035 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.787872 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.808245 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.828227 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.848670 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.868741 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.888617 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.908311 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.927715 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.948690 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.967892 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 16:32:34 crc kubenswrapper[4730]: I0131 16:32:34.988592 4730 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.008234 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.028442 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.047863 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.068376 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.088531 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.108188 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.127978 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.146452 4730 request.go:700] Waited for 1.002109965s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&limit=500&resourceVersion=0 Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.148729 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.168560 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.188051 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.209551 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.227954 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.248342 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.280508 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.287761 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.362434 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zqk\" (UniqueName: 
\"kubernetes.io/projected/5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3-kube-api-access-p8zqk\") pod \"apiserver-76f77b778f-4cfvt\" (UID: \"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3\") " pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.374639 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blnwb\" (UniqueName: \"kubernetes.io/projected/d853c17f-0402-432b-bdee-1c8df9fa0093-kube-api-access-blnwb\") pod \"cluster-samples-operator-665b6dd947-9pj8k\" (UID: \"d853c17f-0402-432b-bdee-1c8df9fa0093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.401167 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvw2\" (UniqueName: \"kubernetes.io/projected/d4524a04-3cf1-48b4-9af1-ca47b1edf9e5-kube-api-access-wnvw2\") pod \"apiserver-7bbb656c7d-tj4cc\" (UID: \"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.416740 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5w5\" (UniqueName: \"kubernetes.io/projected/47607256-aa97-41f0-9847-fdd1b79766ff-kube-api-access-qd5w5\") pod \"authentication-operator-69f744f599-frj85\" (UID: \"47607256-aa97-41f0-9847-fdd1b79766ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.437186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxflx\" (UniqueName: \"kubernetes.io/projected/9a029edf-d8ad-4314-9296-0f6c4f707330-kube-api-access-rxflx\") pod \"controller-manager-879f6c89f-w2n4l\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.464911 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jz98\" (UniqueName: \"kubernetes.io/projected/81d169f8-a558-4b08-a62d-1e4079eb26e3-kube-api-access-8jz98\") pod \"openshift-apiserver-operator-796bbdcf4f-hjcxz\" (UID: \"81d169f8-a558-4b08-a62d-1e4079eb26e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.472733 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v88xq\" (UniqueName: \"kubernetes.io/projected/1b2b6c9a-5a3c-4325-be55-3ba2718191ce-kube-api-access-v88xq\") pod \"machine-api-operator-5694c8668f-vk49s\" (UID: \"1b2b6c9a-5a3c-4325-be55-3ba2718191ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.479396 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.493550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/a4b96638-d5c4-43d4-ab38-15972a55d0f4-kube-api-access-x75gm\") pod \"route-controller-manager-6576b87f9c-ml2ls\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.506198 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2nfr\" (UniqueName: \"kubernetes.io/projected/222cebc4-19ee-44bb-9de4-da091e798019-kube-api-access-s2nfr\") pod \"machine-approver-56656f9798-wzp9m\" (UID: \"222cebc4-19ee-44bb-9de4-da091e798019\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.508407 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.528141 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.547355 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.553334 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.561084 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.570319 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.583458 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.591989 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.608130 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.610697 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.653058 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.653076 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.654959 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.655380 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.668369 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.690657 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.710314 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.730082 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.749405 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.759970 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.767662 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.789525 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.795772 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.807119 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.814744 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w2n4l"] Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.831651 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 16:32:35 crc kubenswrapper[4730]: W0131 16:32:35.832508 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a029edf_d8ad_4314_9296_0f6c4f707330.slice/crio-cc56148f748c708e85e58803d1853d289e77c3d11a7271a7683324ee79749c40 WatchSource:0}: Error finding container cc56148f748c708e85e58803d1853d289e77c3d11a7271a7683324ee79749c40: Status 404 returned error can't find the container with id cc56148f748c708e85e58803d1853d289e77c3d11a7271a7683324ee79749c40 Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.846847 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.867264 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.887885 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.907853 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.927186 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.968443 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.969836 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vk49s"] Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.987010 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989045 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-dir\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d637d59-da07-4756-8234-e17cba93e1b0-serving-cert\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " 
pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989169 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989204 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ce586ff-70fb-4890-9044-5693734e5d8e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989242 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d637d59-da07-4756-8234-e17cba93e1b0-trusted-ca\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989270 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989361 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d96093ab-8af5-4e3c-b89e-601cd9581b80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989399 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989421 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce586ff-70fb-4890-9044-5693734e5d8e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989460 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-oauth-serving-cert\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989482 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpcb2\" (UniqueName: \"kubernetes.io/projected/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-kube-api-access-xpcb2\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989505 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-serving-cert\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989530 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d96093ab-8af5-4e3c-b89e-601cd9581b80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989565 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989590 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-certificates\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.989613 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990036 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-tls\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990085 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-trusted-ca-bundle\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd2f6d82-d306-46dc-a938-2394b017b906-metrics-tls\") pod \"dns-operator-744455d44c-jmpc6\" (UID: \"dd2f6d82-d306-46dc-a938-2394b017b906\") " pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990130 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xm5b\" (UniqueName: \"kubernetes.io/projected/0d637d59-da07-4756-8234-e17cba93e1b0-kube-api-access-6xm5b\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990159 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scf72\" (UniqueName: \"kubernetes.io/projected/d96093ab-8af5-4e3c-b89e-601cd9581b80-kube-api-access-scf72\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990186 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-config\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990203 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d504518-949c-45ca-8fc7-2f7e1d00f611-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990220 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-trusted-ca\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d96093ab-8af5-4e3c-b89e-601cd9581b80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990269 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs947\" (UniqueName: 
\"kubernetes.io/projected/dd2f6d82-d306-46dc-a938-2394b017b906-kube-api-access-zs947\") pod \"dns-operator-744455d44c-jmpc6\" (UID: \"dd2f6d82-d306-46dc-a938-2394b017b906\") " pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990286 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990318 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24t4s\" (UniqueName: \"kubernetes.io/projected/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-kube-api-access-24t4s\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990333 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwxr\" (UniqueName: \"kubernetes.io/projected/e8d1e83c-c1a5-4565-b1bc-454b416c6039-kube-api-access-jpwxr\") pod \"downloads-7954f5f757-2bcp4\" (UID: \"e8d1e83c-c1a5-4565-b1bc-454b416c6039\") " pod="openshift-console/downloads-7954f5f757-2bcp4" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990366 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990386 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990403 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-oauth-config\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990420 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d637d59-da07-4756-8234-e17cba93e1b0-config\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990445 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990469 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-policies\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990484 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl482\" (UniqueName: \"kubernetes.io/projected/5ce586ff-70fb-4890-9044-5693734e5d8e-kube-api-access-dl482\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990522 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlw4r\" (UniqueName: \"kubernetes.io/projected/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-kube-api-access-zlw4r\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990544 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990560 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-serving-cert\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990596 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-bound-sa-token\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990614 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990632 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990672 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-service-ca\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990689 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990716 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d504518-949c-45ca-8fc7-2f7e1d00f611-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: I0131 16:32:35.990733 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8w4\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-kube-api-access-7c8w4\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:35 crc kubenswrapper[4730]: E0131 16:32:35.991407 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:36.491392841 +0000 UTC m=+143.297449877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.007080 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.007099 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz"] Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.027009 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.048514 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.068455 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.068723 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-frj85"] Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.073394 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4cfvt"] Jan 31 16:32:36 crc kubenswrapper[4730]: W0131 16:32:36.074702 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47607256_aa97_41f0_9847_fdd1b79766ff.slice/crio-eb6dbe89bd964f3ed683be0192eda20240f4ff6bcd87cdde83f4718b2408c4f7 WatchSource:0}: Error finding container eb6dbe89bd964f3ed683be0192eda20240f4ff6bcd87cdde83f4718b2408c4f7: Status 404 returned error can't find the container with id eb6dbe89bd964f3ed683be0192eda20240f4ff6bcd87cdde83f4718b2408c4f7 Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.088030 4730 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091115 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.091236 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:36.591213885 +0000 UTC m=+143.397270801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs947\" (UniqueName: \"kubernetes.io/projected/dd2f6d82-d306-46dc-a938-2394b017b906-kube-api-access-zs947\") pod \"dns-operator-744455d44c-jmpc6\" (UID: \"dd2f6d82-d306-46dc-a938-2394b017b906\") " pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091399 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24t4s\" (UniqueName: \"kubernetes.io/projected/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-kube-api-access-24t4s\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwxr\" (UniqueName: \"kubernetes.io/projected/e8d1e83c-c1a5-4565-b1bc-454b416c6039-kube-api-access-jpwxr\") pod \"downloads-7954f5f757-2bcp4\" (UID: \"e8d1e83c-c1a5-4565-b1bc-454b416c6039\") " pod="openshift-console/downloads-7954f5f757-2bcp4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a966345-1030-44c4-bf3a-6547e5d3aeda-proxy-tls\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091539 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.091755 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.091896 4730 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:36.591868594 +0000 UTC m=+143.397925510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092194 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6795c6e3-2333-4112-9ee7-b6074347208b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092237 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/817788cb-28d2-41a7-a5c8-b19287a6aa8b-certs\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092262 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d637d59-da07-4756-8234-e17cba93e1b0-config\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092279 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5104074c-31a4-4e5f-af89-97ad9a1ab8ad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ks8gz\" (UID: \"5104074c-31a4-4e5f-af89-97ad9a1ab8ad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092294 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0057e5b1-8c91-43c4-86ed-337c6e69caf9-profile-collector-cert\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092330 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sglh\" (UniqueName: \"kubernetes.io/projected/f4115c67-25d3-4bdd-81ca-b63122b92fda-kube-api-access-7sglh\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092348 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-policies\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092363 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl482\" (UniqueName: \"kubernetes.io/projected/5ce586ff-70fb-4890-9044-5693734e5d8e-kube-api-access-dl482\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092397 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5625c912-fe62-4364-9ca3-006d0bfbd502-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092425 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlw4r\" (UniqueName: \"kubernetes.io/projected/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-kube-api-access-zlw4r\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092443 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-registration-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092475 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092493 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6f6285d-f680-4eec-ad4d-b9375b31bd21-proxy-tls\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092511 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-bound-sa-token\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092527 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5625c912-fe62-4364-9ca3-006d0bfbd502-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092565 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092582 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b61a61bd-3aaa-42b6-9681-2945b18462c2-secret-volume\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092603 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-config\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092650 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjtg\" (UniqueName: \"kubernetes.io/projected/e0ac8516-d776-4d92-933e-1d6a8d427d5f-kube-api-access-whjtg\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092674 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vvl\" (UniqueName: \"kubernetes.io/projected/7798a0ca-0eb6-49e0-b531-e021ddbb7587-kube-api-access-72vvl\") pod \"package-server-manager-789f6589d5-bzzjv\" (UID: \"7798a0ca-0eb6-49e0-b531-e021ddbb7587\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092692 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83601b9-c609-468f-8c2d-34a8a94e42d1-serving-cert\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092712 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092729 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-stats-auth\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092746 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/201151bb-7b5e-4564-ae1c-9b0b76e19778-config\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-service-ca\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092791 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d504518-949c-45ca-8fc7-2f7e1d00f611-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092830 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8w4\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-kube-api-access-7c8w4\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092847 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwfp\" (UniqueName: \"kubernetes.io/projected/0057e5b1-8c91-43c4-86ed-337c6e69caf9-kube-api-access-5mwfp\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092866 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d637d59-da07-4756-8234-e17cba93e1b0-serving-cert\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092886 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6fccbc-e15d-4cbb-a200-b77420363b3f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-service-ca\") pod 
\"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092921 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d637d59-da07-4756-8234-e17cba93e1b0-trusted-ca\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.092938 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-ca\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093068 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093086 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a966345-1030-44c4-bf3a-6547e5d3aeda-images\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093108 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-config\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/983dfbbb-8bc4-4935-b359-c885fc748600-trusted-ca\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093147 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7798a0ca-0eb6-49e0-b531-e021ddbb7587-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bzzjv\" (UID: \"7798a0ca-0eb6-49e0-b531-e021ddbb7587\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093166 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce586ff-70fb-4890-9044-5693734e5d8e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093185 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6f6285d-f680-4eec-ad4d-b9375b31bd21-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093214 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e0ac8516-d776-4d92-933e-1d6a8d427d5f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093231 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2hjv\" (UniqueName: \"kubernetes.io/projected/ee6fccbc-e15d-4cbb-a200-b77420363b3f-kube-api-access-g2hjv\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093246 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6795c6e3-2333-4112-9ee7-b6074347208b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093261 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/817788cb-28d2-41a7-a5c8-b19287a6aa8b-node-bootstrap-token\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093321 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-plugins-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093338 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct744\" (UniqueName: \"kubernetes.io/projected/3a966345-1030-44c4-bf3a-6547e5d3aeda-kube-api-access-ct744\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.093355 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-serving-cert\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094418 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d96093ab-8af5-4e3c-b89e-601cd9581b80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094471 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9njq\" (UniqueName: \"kubernetes.io/projected/c83601b9-c609-468f-8c2d-34a8a94e42d1-kube-api-access-q9njq\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094507 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-default-certificate\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094526 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-certificates\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094543 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8srld\" (UniqueName: \"kubernetes.io/projected/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-kube-api-access-8srld\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094576 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfxv\" (UniqueName: 
\"kubernetes.io/projected/469740fc-098b-4156-b459-02d7a1afefab-kube-api-access-2nfxv\") pod \"ingress-canary-wc5vj\" (UID: \"469740fc-098b-4156-b459-02d7a1afefab\") " pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094607 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd2f6d82-d306-46dc-a938-2394b017b906-metrics-tls\") pod \"dns-operator-744455d44c-jmpc6\" (UID: \"dd2f6d82-d306-46dc-a938-2394b017b906\") " pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094623 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scf72\" (UniqueName: \"kubernetes.io/projected/d96093ab-8af5-4e3c-b89e-601cd9581b80-kube-api-access-scf72\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094639 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0057e5b1-8c91-43c4-86ed-337c6e69caf9-srv-cert\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094657 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-tls\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094674 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-trusted-ca-bundle\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094691 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsl4q\" (UniqueName: \"kubernetes.io/projected/216a1f0f-785a-4dfa-b084-501b799637b7-kube-api-access-dsl4q\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094708 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-config\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094725 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-serving-cert\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 
16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094743 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-trusted-ca\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094765 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9fz2\" (UniqueName: \"kubernetes.io/projected/3ba1ee3d-4cef-4fc3-8c31-5f544dd56244-kube-api-access-q9fz2\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5xfm\" (UID: \"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094784 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094816 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f4115c67-25d3-4bdd-81ca-b63122b92fda-signing-cabundle\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094835 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094850 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/201151bb-7b5e-4564-ae1c-9b0b76e19778-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094867 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-oauth-config\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094882 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/201151bb-7b5e-4564-ae1c-9b0b76e19778-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 
16:32:36.094899 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094916 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58vs\" (UniqueName: \"kubernetes.io/projected/e6f6285d-f680-4eec-ad4d-b9375b31bd21-kube-api-access-v58vs\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094935 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-mountpoint-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094952 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wq8f\" (UniqueName: \"kubernetes.io/projected/d6d0cf39-4835-4f5d-8c5a-9521331913ac-kube-api-access-7wq8f\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094974 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-csi-data-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.094988 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/983dfbbb-8bc4-4935-b359-c885fc748600-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095001 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-tmpfs\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095016 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6fccbc-e15d-4cbb-a200-b77420363b3f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095033 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095051 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-serving-cert\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095066 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvj5\" (UniqueName: \"kubernetes.io/projected/817788cb-28d2-41a7-a5c8-b19287a6aa8b-kube-api-access-7pvj5\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095082 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-metrics-certs\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095098 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f4115c67-25d3-4bdd-81ca-b63122b92fda-signing-key\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095112 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-config-volume\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095126 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-metrics-tls\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095144 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwh6w\" (UniqueName: \"kubernetes.io/projected/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-kube-api-access-dwh6w\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095163 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42sz6\" (UniqueName: 
\"kubernetes.io/projected/edec59e3-15cb-4032-a5a9-e25e12cc6e9e-kube-api-access-42sz6\") pod \"migrator-59844c95c7-ntvr6\" (UID: \"edec59e3-15cb-4032-a5a9-e25e12cc6e9e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095188 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095207 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/983dfbbb-8bc4-4935-b359-c885fc748600-metrics-tls\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095237 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a966345-1030-44c4-bf3a-6547e5d3aeda-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095265 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-dir\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095282 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095298 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ba1ee3d-4cef-4fc3-8c31-5f544dd56244-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5xfm\" (UID: \"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095315 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7wh\" (UniqueName: \"kubernetes.io/projected/983dfbbb-8bc4-4935-b359-c885fc748600-kube-api-access-kt7wh\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095331 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hbnqd\" (UniqueName: \"kubernetes.io/projected/5104074c-31a4-4e5f-af89-97ad9a1ab8ad-kube-api-access-hbnqd\") pod \"multus-admission-controller-857f4d67dd-ks8gz\" (UID: \"5104074c-31a4-4e5f-af89-97ad9a1ab8ad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095347 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095363 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ce586ff-70fb-4890-9044-5693734e5d8e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095379 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e0ac8516-d776-4d92-933e-1d6a8d427d5f-srv-cert\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095396 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-socket-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095427 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d96093ab-8af5-4e3c-b89e-601cd9581b80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095463 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-webhook-cert\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095480 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-oauth-serving-cert\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095496 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpcb2\" (UniqueName: \"kubernetes.io/projected/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-kube-api-access-xpcb2\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095516 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/887bb6af-277c-4837-b71a-6a94d0eb2edf-service-ca-bundle\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095534 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/469740fc-098b-4156-b459-02d7a1afefab-cert\") pod \"ingress-canary-wc5vj\" (UID: \"469740fc-098b-4156-b459-02d7a1afefab\") " pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-client\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095568 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmtpf\" (UniqueName: \"kubernetes.io/projected/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-kube-api-access-xmtpf\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095584 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frn4n\" (UniqueName: \"kubernetes.io/projected/b61a61bd-3aaa-42b6-9681-2945b18462c2-kube-api-access-frn4n\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095599 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlhv4\" (UniqueName: \"kubernetes.io/projected/887bb6af-277c-4837-b71a-6a94d0eb2edf-kube-api-access-rlhv4\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095615 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5625c912-fe62-4364-9ca3-006d0bfbd502-config\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095631 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6795c6e3-2333-4112-9ee7-b6074347208b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095650 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xm5b\" (UniqueName: \"kubernetes.io/projected/0d637d59-da07-4756-8234-e17cba93e1b0-kube-api-access-6xm5b\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095667 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b61a61bd-3aaa-42b6-9681-2945b18462c2-config-volume\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095706 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d504518-949c-45ca-8fc7-2f7e1d00f611-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.095722 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d96093ab-8af5-4e3c-b89e-601cd9581b80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.096230 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-policies\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.096448 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d637d59-da07-4756-8234-e17cba93e1b0-config\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.097010 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-service-ca\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.101077 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.101214 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.101983 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-trusted-ca-bundle\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.102384 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d637d59-da07-4756-8234-e17cba93e1b0-trusted-ca\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.102688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-dir\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.102908 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-config\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.103263 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.103866 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-serving-cert\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.104053 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d637d59-da07-4756-8234-e17cba93e1b0-serving-cert\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " 
pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.105590 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-trusted-ca\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.106056 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d504518-949c-45ca-8fc7-2f7e1d00f611-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.106416 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.106715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d96093ab-8af5-4e3c-b89e-601cd9581b80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.106859 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.106927 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-certificates\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.107082 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.107979 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.107659 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-oauth-serving-cert\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " 
pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.108370 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.108402 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd2f6d82-d306-46dc-a938-2394b017b906-metrics-tls\") pod \"dns-operator-744455d44c-jmpc6\" (UID: \"dd2f6d82-d306-46dc-a938-2394b017b906\") " pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.108603 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.108634 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.108734 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.111177 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-serving-cert\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.111771 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ce586ff-70fb-4890-9044-5693734e5d8e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.112291 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce586ff-70fb-4890-9044-5693734e5d8e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:36 crc 
kubenswrapper[4730]: I0131 16:32:36.112290 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d96093ab-8af5-4e3c-b89e-601cd9581b80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.112424 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-tls\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.113838 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d504518-949c-45ca-8fc7-2f7e1d00f611-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.114900 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-oauth-config\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.116853 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.130059 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k"] Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.130490 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.134317 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc"] Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.141403 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls"] Jan 31 16:32:36 crc kubenswrapper[4730]: W0131 16:32:36.145077 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4524a04_3cf1_48b4_9af1_ca47b1edf9e5.slice/crio-32479a8c7273661a71d34062c84bd01fa08272b2b4090050766b04cea6c07258 WatchSource:0}: Error finding container 32479a8c7273661a71d34062c84bd01fa08272b2b4090050766b04cea6c07258: Status 404 returned error can't find the container with id 32479a8c7273661a71d34062c84bd01fa08272b2b4090050766b04cea6c07258 Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.147659 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.166444 4730 request.go:700] Waited for 1.916835528s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.170201 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.189828 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197019 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197158 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/201151bb-7b5e-4564-ae1c-9b0b76e19778-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197181 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/201151bb-7b5e-4564-ae1c-9b0b76e19778-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197203 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58vs\" (UniqueName: \"kubernetes.io/projected/e6f6285d-f680-4eec-ad4d-b9375b31bd21-kube-api-access-v58vs\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-mountpoint-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wq8f\" (UniqueName: \"kubernetes.io/projected/d6d0cf39-4835-4f5d-8c5a-9521331913ac-kube-api-access-7wq8f\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197254 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-csi-data-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197269 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/983dfbbb-8bc4-4935-b359-c885fc748600-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197284 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-tmpfs\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197299 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6fccbc-e15d-4cbb-a200-b77420363b3f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197315 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pvj5\" (UniqueName: \"kubernetes.io/projected/817788cb-28d2-41a7-a5c8-b19287a6aa8b-kube-api-access-7pvj5\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197331 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-metrics-certs\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197345 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-config-volume\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197360 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f4115c67-25d3-4bdd-81ca-b63122b92fda-signing-key\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197376 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwh6w\" (UniqueName: \"kubernetes.io/projected/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-kube-api-access-dwh6w\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: 
I0131 16:32:36.197391 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-metrics-tls\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197409 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/983dfbbb-8bc4-4935-b359-c885fc748600-metrics-tls\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197427 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42sz6\" (UniqueName: \"kubernetes.io/projected/edec59e3-15cb-4032-a5a9-e25e12cc6e9e-kube-api-access-42sz6\") pod \"migrator-59844c95c7-ntvr6\" (UID: \"edec59e3-15cb-4032-a5a9-e25e12cc6e9e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197449 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a966345-1030-44c4-bf3a-6547e5d3aeda-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197466 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ba1ee3d-4cef-4fc3-8c31-5f544dd56244-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5xfm\" (UID: \"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197483 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197498 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt7wh\" (UniqueName: \"kubernetes.io/projected/983dfbbb-8bc4-4935-b359-c885fc748600-kube-api-access-kt7wh\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197514 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnqd\" (UniqueName: \"kubernetes.io/projected/5104074c-31a4-4e5f-af89-97ad9a1ab8ad-kube-api-access-hbnqd\") pod \"multus-admission-controller-857f4d67dd-ks8gz\" (UID: \"5104074c-31a4-4e5f-af89-97ad9a1ab8ad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197530 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/e0ac8516-d776-4d92-933e-1d6a8d427d5f-srv-cert\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197547 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-socket-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-webhook-cert\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197595 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/887bb6af-277c-4837-b71a-6a94d0eb2edf-service-ca-bundle\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197609 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/469740fc-098b-4156-b459-02d7a1afefab-cert\") pod \"ingress-canary-wc5vj\" (UID: \"469740fc-098b-4156-b459-02d7a1afefab\") " pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197626 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-client\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197641 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6795c6e3-2333-4112-9ee7-b6074347208b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197654 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmtpf\" (UniqueName: \"kubernetes.io/projected/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-kube-api-access-xmtpf\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197671 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frn4n\" (UniqueName: \"kubernetes.io/projected/b61a61bd-3aaa-42b6-9681-2945b18462c2-kube-api-access-frn4n\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 
16:32:36.197688 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlhv4\" (UniqueName: \"kubernetes.io/projected/887bb6af-277c-4837-b71a-6a94d0eb2edf-kube-api-access-rlhv4\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197704 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5625c912-fe62-4364-9ca3-006d0bfbd502-config\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197724 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b61a61bd-3aaa-42b6-9681-2945b18462c2-config-volume\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197744 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a966345-1030-44c4-bf3a-6547e5d3aeda-proxy-tls\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197781 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0057e5b1-8c91-43c4-86ed-337c6e69caf9-profile-collector-cert\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197796 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6795c6e3-2333-4112-9ee7-b6074347208b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197832 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/817788cb-28d2-41a7-a5c8-b19287a6aa8b-certs\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5104074c-31a4-4e5f-af89-97ad9a1ab8ad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ks8gz\" (UID: \"5104074c-31a4-4e5f-af89-97ad9a1ab8ad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197864 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sglh\" (UniqueName: 
\"kubernetes.io/projected/f4115c67-25d3-4bdd-81ca-b63122b92fda-kube-api-access-7sglh\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197883 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5625c912-fe62-4364-9ca3-006d0bfbd502-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197910 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-registration-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197928 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6f6285d-f680-4eec-ad4d-b9375b31bd21-proxy-tls\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197943 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-config\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197958 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5625c912-fe62-4364-9ca3-006d0bfbd502-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197973 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b61a61bd-3aaa-42b6-9681-2945b18462c2-secret-volume\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.197990 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whjtg\" (UniqueName: \"kubernetes.io/projected/e0ac8516-d776-4d92-933e-1d6a8d427d5f-kube-api-access-whjtg\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198007 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72vvl\" (UniqueName: \"kubernetes.io/projected/7798a0ca-0eb6-49e0-b531-e021ddbb7587-kube-api-access-72vvl\") pod \"package-server-manager-789f6589d5-bzzjv\" (UID: 
\"7798a0ca-0eb6-49e0-b531-e021ddbb7587\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198020 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83601b9-c609-468f-8c2d-34a8a94e42d1-serving-cert\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198036 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198050 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-stats-auth\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198064 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/201151bb-7b5e-4564-ae1c-9b0b76e19778-config\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198080 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwfp\" (UniqueName: \"kubernetes.io/projected/0057e5b1-8c91-43c4-86ed-337c6e69caf9-kube-api-access-5mwfp\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198105 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6fccbc-e15d-4cbb-a200-b77420363b3f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198126 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-service-ca\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198166 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-ca\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198185 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a966345-1030-44c4-bf3a-6547e5d3aeda-images\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198202 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7798a0ca-0eb6-49e0-b531-e021ddbb7587-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bzzjv\" (UID: \"7798a0ca-0eb6-49e0-b531-e021ddbb7587\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198217 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-config\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/983dfbbb-8bc4-4935-b359-c885fc748600-trusted-ca\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198248 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6f6285d-f680-4eec-ad4d-b9375b31bd21-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198262 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6795c6e3-2333-4112-9ee7-b6074347208b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198278 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e0ac8516-d776-4d92-933e-1d6a8d427d5f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198295 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2hjv\" (UniqueName: \"kubernetes.io/projected/ee6fccbc-e15d-4cbb-a200-b77420363b3f-kube-api-access-g2hjv\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198312 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ct744\" (UniqueName: \"kubernetes.io/projected/3a966345-1030-44c4-bf3a-6547e5d3aeda-kube-api-access-ct744\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198327 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/817788cb-28d2-41a7-a5c8-b19287a6aa8b-node-bootstrap-token\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198342 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-plugins-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198363 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9njq\" (UniqueName: \"kubernetes.io/projected/c83601b9-c609-468f-8c2d-34a8a94e42d1-kube-api-access-q9njq\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198402 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-default-certificate\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198418 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nfxv\" (UniqueName: \"kubernetes.io/projected/469740fc-098b-4156-b459-02d7a1afefab-kube-api-access-2nfxv\") pod \"ingress-canary-wc5vj\" (UID: \"469740fc-098b-4156-b459-02d7a1afefab\") " pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198433 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8srld\" (UniqueName: \"kubernetes.io/projected/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-kube-api-access-8srld\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198455 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0057e5b1-8c91-43c4-86ed-337c6e69caf9-srv-cert\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198472 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsl4q\" (UniqueName: \"kubernetes.io/projected/216a1f0f-785a-4dfa-b084-501b799637b7-kube-api-access-dsl4q\") pod \"csi-hostpathplugin-gj2x5\" (UID: 
\"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198488 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-serving-cert\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198505 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9fz2\" (UniqueName: \"kubernetes.io/projected/3ba1ee3d-4cef-4fc3-8c31-5f544dd56244-kube-api-access-q9fz2\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5xfm\" (UID: \"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198520 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.198535 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f4115c67-25d3-4bdd-81ca-b63122b92fda-signing-cabundle\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.199246 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f4115c67-25d3-4bdd-81ca-b63122b92fda-signing-cabundle\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.199557 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a966345-1030-44c4-bf3a-6547e5d3aeda-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.199639 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:36.699625655 +0000 UTC m=+143.505682571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.199861 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-plugins-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.200464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-config\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.201408 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/983dfbbb-8bc4-4935-b359-c885fc748600-trusted-ca\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.202104 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6f6285d-f680-4eec-ad4d-b9375b31bd21-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.202279 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5625c912-fe62-4364-9ca3-006d0bfbd502-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.202361 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-registration-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.203384 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ba1ee3d-4cef-4fc3-8c31-5f544dd56244-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5xfm\" (UID: \"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.204752 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-config\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.205561 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5104074c-31a4-4e5f-af89-97ad9a1ab8ad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ks8gz\" (UID: \"5104074c-31a4-4e5f-af89-97ad9a1ab8ad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.205627 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.206134 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83601b9-c609-468f-8c2d-34a8a94e42d1-serving-cert\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.206627 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6fccbc-e15d-4cbb-a200-b77420363b3f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.207135 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-service-ca\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.207551 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-ca\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.208973 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a966345-1030-44c4-bf3a-6547e5d3aeda-images\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.209120 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b61a61bd-3aaa-42b6-9681-2945b18462c2-secret-volume\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc 
kubenswrapper[4730]: I0131 16:32:36.209418 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7798a0ca-0eb6-49e0-b531-e021ddbb7587-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bzzjv\" (UID: \"7798a0ca-0eb6-49e0-b531-e021ddbb7587\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.209539 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-socket-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.210257 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6f6285d-f680-4eec-ad4d-b9375b31bd21-proxy-tls\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.210442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e0ac8516-d776-4d92-933e-1d6a8d427d5f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.210462 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5625c912-fe62-4364-9ca3-006d0bfbd502-config\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.210711 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.210952 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-apiservice-cert\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.211050 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b61a61bd-3aaa-42b6-9681-2945b18462c2-config-volume\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.211566 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6795c6e3-2333-4112-9ee7-b6074347208b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" 
Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.211684 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-default-certificate\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.211741 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-config-volume\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.211988 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e0ac8516-d776-4d92-933e-1d6a8d427d5f-srv-cert\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.212175 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/201151bb-7b5e-4564-ae1c-9b0b76e19778-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.212262 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-metrics-certs\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.212757 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/201151bb-7b5e-4564-ae1c-9b0b76e19778-config\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.212923 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-mountpoint-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.213826 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/469740fc-098b-4156-b459-02d7a1afefab-cert\") pod \"ingress-canary-wc5vj\" (UID: \"469740fc-098b-4156-b459-02d7a1afefab\") " pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.214046 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6fccbc-e15d-4cbb-a200-b77420363b3f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.214156 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-tmpfs\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.214260 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/216a1f0f-785a-4dfa-b084-501b799637b7-csi-data-dir\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.214614 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0057e5b1-8c91-43c4-86ed-337c6e69caf9-profile-collector-cert\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.217785 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/887bb6af-277c-4837-b71a-6a94d0eb2edf-service-ca-bundle\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.220036 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-serving-cert\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.220991 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.221445 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/887bb6af-277c-4837-b71a-6a94d0eb2edf-stats-auth\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.221844 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0057e5b1-8c91-43c4-86ed-337c6e69caf9-srv-cert\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.223762 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/817788cb-28d2-41a7-a5c8-b19287a6aa8b-node-bootstrap-token\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.224089 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a966345-1030-44c4-bf3a-6547e5d3aeda-proxy-tls\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.224997 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6795c6e3-2333-4112-9ee7-b6074347208b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.225445 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-webhook-cert\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.227175 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f4115c67-25d3-4bdd-81ca-b63122b92fda-signing-key\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.227483 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.228263 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c83601b9-c609-468f-8c2d-34a8a94e42d1-etcd-client\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.229790 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/983dfbbb-8bc4-4935-b359-c885fc748600-metrics-tls\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.230874 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-metrics-tls\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.235751 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/817788cb-28d2-41a7-a5c8-b19287a6aa8b-certs\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " 
pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.280200 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs947\" (UniqueName: \"kubernetes.io/projected/dd2f6d82-d306-46dc-a938-2394b017b906-kube-api-access-zs947\") pod \"dns-operator-744455d44c-jmpc6\" (UID: \"dd2f6d82-d306-46dc-a938-2394b017b906\") " pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.299933 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.300342 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:36.800332295 +0000 UTC m=+143.606389211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.301696 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24t4s\" (UniqueName: \"kubernetes.io/projected/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-kube-api-access-24t4s\") pod \"oauth-openshift-558db77b4-5kjkn\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.319459 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwxr\" (UniqueName: \"kubernetes.io/projected/e8d1e83c-c1a5-4565-b1bc-454b416c6039-kube-api-access-jpwxr\") pod \"downloads-7954f5f757-2bcp4\" (UID: \"e8d1e83c-c1a5-4565-b1bc-454b416c6039\") " pod="openshift-console/downloads-7954f5f757-2bcp4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.342498 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlw4r\" (UniqueName: \"kubernetes.io/projected/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-kube-api-access-zlw4r\") pod \"console-f9d7485db-6v2xk\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.359355 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2bcp4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.365489 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.366674 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-bound-sa-token\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.380037 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl482\" (UniqueName: \"kubernetes.io/projected/5ce586ff-70fb-4890-9044-5693734e5d8e-kube-api-access-dl482\") pod \"openshift-controller-manager-operator-756b6f6bc6-grxdz\" (UID: \"5ce586ff-70fb-4890-9044-5693734e5d8e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.406395 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.406604 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:36.906578691 +0000 UTC m=+143.712635607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.406873 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.407226 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:36.90721243 +0000 UTC m=+143.713269436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.411207 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scf72\" (UniqueName: \"kubernetes.io/projected/d96093ab-8af5-4e3c-b89e-601cd9581b80-kube-api-access-scf72\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.436098 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d96093ab-8af5-4e3c-b89e-601cd9581b80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rdbnh\" (UID: \"d96093ab-8af5-4e3c-b89e-601cd9581b80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.449856 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8w4\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-kube-api-access-7c8w4\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.482296 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpcb2\" (UniqueName: \"kubernetes.io/projected/f2b47509-6f1d-40c5-94d7-10aa37fa5dce-kube-api-access-xpcb2\") pod \"openshift-config-operator-7777fb866f-ggbf6\" (UID: \"f2b47509-6f1d-40c5-94d7-10aa37fa5dce\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.504634 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xm5b\" (UniqueName: \"kubernetes.io/projected/0d637d59-da07-4756-8234-e17cba93e1b0-kube-api-access-6xm5b\") pod \"console-operator-58897d9998-28kdr\" (UID: \"0d637d59-da07-4756-8234-e17cba93e1b0\") " pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.508280 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.508565 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.008542219 +0000 UTC m=+143.814599135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.508971 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.509336 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.009323452 +0000 UTC m=+143.815380368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.513643 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72vvl\" (UniqueName: \"kubernetes.io/projected/7798a0ca-0eb6-49e0-b531-e021ddbb7587-kube-api-access-72vvl\") pod \"package-server-manager-789f6589d5-bzzjv\" (UID: \"7798a0ca-0eb6-49e0-b531-e021ddbb7587\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.529452 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sglh\" (UniqueName: \"kubernetes.io/projected/f4115c67-25d3-4bdd-81ca-b63122b92fda-kube-api-access-7sglh\") pod \"service-ca-9c57cc56f-j5kgc\" (UID: \"f4115c67-25d3-4bdd-81ca-b63122b92fda\") " pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.548021 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6795c6e3-2333-4112-9ee7-b6074347208b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wql8z\" (UID: \"6795c6e3-2333-4112-9ee7-b6074347208b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.558614 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.573941 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwfp\" (UniqueName: \"kubernetes.io/projected/0057e5b1-8c91-43c4-86ed-337c6e69caf9-kube-api-access-5mwfp\") pod \"catalog-operator-68c6474976-tnfvq\" (UID: \"0057e5b1-8c91-43c4-86ed-337c6e69caf9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.581234 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5625c912-fe62-4364-9ca3-006d0bfbd502-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qf5fm\" (UID: \"5625c912-fe62-4364-9ca3-006d0bfbd502\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.596436 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jmpc6"] Jan 31 16:32:36 crc kubenswrapper[4730]: W0131 16:32:36.602776 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd2f6d82_d306_46dc_a938_2394b017b906.slice/crio-7d4b7255c485a752069db405f8d4e258e0d45f803972ab1b720d664a953eca8d WatchSource:0}: Error finding container 7d4b7255c485a752069db405f8d4e258e0d45f803972ab1b720d664a953eca8d: Status 404 returned error can't find the container with id 7d4b7255c485a752069db405f8d4e258e0d45f803972ab1b720d664a953eca8d Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.608097 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.609611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.609723 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.109680432 +0000 UTC m=+143.915737348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.609861 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.610151 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.110142486 +0000 UTC m=+143.916199402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.610414 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct744\" (UniqueName: \"kubernetes.io/projected/3a966345-1030-44c4-bf3a-6547e5d3aeda-kube-api-access-ct744\") pod \"machine-config-operator-74547568cd-nxqw5\" (UID: \"3a966345-1030-44c4-bf3a-6547e5d3aeda\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.620048 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2hjv\" (UniqueName: \"kubernetes.io/projected/ee6fccbc-e15d-4cbb-a200-b77420363b3f-kube-api-access-g2hjv\") pod \"kube-storage-version-migrator-operator-b67b599dd-lc466\" (UID: \"ee6fccbc-e15d-4cbb-a200-b77420363b3f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.621939 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.641330 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.644946 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9njq\" (UniqueName: \"kubernetes.io/projected/c83601b9-c609-468f-8c2d-34a8a94e42d1-kube-api-access-q9njq\") pod \"etcd-operator-b45778765-cp5tf\" (UID: \"c83601b9-c609-468f-8c2d-34a8a94e42d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.652007 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.660081 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt7wh\" (UniqueName: \"kubernetes.io/projected/983dfbbb-8bc4-4935-b359-c885fc748600-kube-api-access-kt7wh\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.672281 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.678113 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.683904 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbnqd\" (UniqueName: \"kubernetes.io/projected/5104074c-31a4-4e5f-af89-97ad9a1ab8ad-kube-api-access-hbnqd\") pod \"multus-admission-controller-857f4d67dd-ks8gz\" (UID: \"5104074c-31a4-4e5f-af89-97ad9a1ab8ad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.684414 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.704238 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.708514 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pvj5\" (UniqueName: \"kubernetes.io/projected/817788cb-28d2-41a7-a5c8-b19287a6aa8b-kube-api-access-7pvj5\") pod \"machine-config-server-nxpmk\" (UID: \"817788cb-28d2-41a7-a5c8-b19287a6aa8b\") " pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.710483 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.710849 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.210833656 +0000 UTC m=+144.016890572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.730883 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.731868 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmtpf\" (UniqueName: \"kubernetes.io/projected/6ed0e8d6-c52f-421e-afc6-58098dfaf5a8-kube-api-access-xmtpf\") pod \"dns-default-5vcn4\" (UID: \"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8\") " pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.739060 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.749274 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.765508 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frn4n\" (UniqueName: \"kubernetes.io/projected/b61a61bd-3aaa-42b6-9681-2945b18462c2-kube-api-access-frn4n\") pod \"collect-profiles-29497950-c6ftl\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.772223 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" event={"ID":"222cebc4-19ee-44bb-9de4-da091e798019","Type":"ContainerStarted","Data":"408cea147e122361fa9c9347099a6add9c03ffcf4cf6bb7e48639d88e4d1fa40"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.772260 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" event={"ID":"222cebc4-19ee-44bb-9de4-da091e798019","Type":"ContainerStarted","Data":"1c6878b525b4084bec89e2086e1c1b8ffe01f9ec9823339856a81e6eb50305c3"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.773731 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" event={"ID":"47607256-aa97-41f0-9847-fdd1b79766ff","Type":"ContainerStarted","Data":"c0b61e7bcf70f46a984647238610dfa968d68e9aba3243bbf0d563f3f03fbb91"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.773755 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" event={"ID":"47607256-aa97-41f0-9847-fdd1b79766ff","Type":"ContainerStarted","Data":"eb6dbe89bd964f3ed683be0192eda20240f4ff6bcd87cdde83f4718b2408c4f7"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.777971 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" event={"ID":"81d169f8-a558-4b08-a62d-1e4079eb26e3","Type":"ContainerStarted","Data":"8e22fd7346eefc3894999adb7e55c0679b7f355ad9675c339645fc58f001f4dc"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.777999 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" event={"ID":"81d169f8-a558-4b08-a62d-1e4079eb26e3","Type":"ContainerStarted","Data":"43c8e6b90a5ea280ed4fcada59c98651bec589307321a8a0985be82dcbec95fd"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.786256 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" event={"ID":"d853c17f-0402-432b-bdee-1c8df9fa0093","Type":"ContainerStarted","Data":"3955579b32b6186e1a59f5e5c98fa28bbaa90d5068383258d8f1eaa03ffe6e1d"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.786300 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" event={"ID":"d853c17f-0402-432b-bdee-1c8df9fa0093","Type":"ContainerStarted","Data":"6e93693dbf958623de59843edaef7b2e41fbcf274f8d19ac5de93022529965a3"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.791110 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.791851 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlhv4\" (UniqueName: \"kubernetes.io/projected/887bb6af-277c-4837-b71a-6a94d0eb2edf-kube-api-access-rlhv4\") pod \"router-default-5444994796-jwc2k\" (UID: \"887bb6af-277c-4837-b71a-6a94d0eb2edf\") " pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.791896 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjtg\" (UniqueName: \"kubernetes.io/projected/e0ac8516-d776-4d92-933e-1d6a8d427d5f-kube-api-access-whjtg\") pod \"olm-operator-6b444d44fb-ttmdv\" (UID: \"e0ac8516-d776-4d92-933e-1d6a8d427d5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.792114 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" event={"ID":"9a029edf-d8ad-4314-9296-0f6c4f707330","Type":"ContainerStarted","Data":"e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.792145 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" event={"ID":"9a029edf-d8ad-4314-9296-0f6c4f707330","Type":"ContainerStarted","Data":"cc56148f748c708e85e58803d1853d289e77c3d11a7271a7683324ee79749c40"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.792730 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.800317 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.800442 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-w2n4l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.800510 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" podUID="9a029edf-d8ad-4314-9296-0f6c4f707330" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.800820 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" event={"ID":"dd2f6d82-d306-46dc-a938-2394b017b906","Type":"ContainerStarted","Data":"7d4b7255c485a752069db405f8d4e258e0d45f803972ab1b720d664a953eca8d"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.807648 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2bcp4"] Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.809197 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" event={"ID":"a4b96638-d5c4-43d4-ab38-15972a55d0f4","Type":"ContainerStarted","Data":"0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.809233 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" event={"ID":"a4b96638-d5c4-43d4-ab38-15972a55d0f4","Type":"ContainerStarted","Data":"6a252051842af1e2932c913da5622a6d4237c6bc6c7acc41f823d6016a3c4266"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.809678 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.810732 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.810773 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" event={"ID":"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5","Type":"ContainerStarted","Data":"32479a8c7273661a71d34062c84bd01fa08272b2b4090050766b04cea6c07258"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.811100 4730 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ml2ls container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.811127 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" podUID="a4b96638-d5c4-43d4-ab38-15972a55d0f4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.811463 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.811768 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.311758072 +0000 UTC m=+144.117814988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.812982 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42sz6\" (UniqueName: \"kubernetes.io/projected/edec59e3-15cb-4032-a5a9-e25e12cc6e9e-kube-api-access-42sz6\") pod \"migrator-59844c95c7-ntvr6\" (UID: \"edec59e3-15cb-4032-a5a9-e25e12cc6e9e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.817065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" event={"ID":"1b2b6c9a-5a3c-4325-be55-3ba2718191ce","Type":"ContainerStarted","Data":"47304caf3ce903433cd50b63e2736891f05b30304bdad08b4161eef8d513e12a"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.817092 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" event={"ID":"1b2b6c9a-5a3c-4325-be55-3ba2718191ce","Type":"ContainerStarted","Data":"2dec1ab0274cd2e579cb76ea7c2c846da62cba69924fcafcd2805bbdd1f31f9f"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.817100 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" event={"ID":"1b2b6c9a-5a3c-4325-be55-3ba2718191ce","Type":"ContainerStarted","Data":"49a04c3d9538dec1c21cfc8b6fa45b29b0bde426fb721e87cfa3196f9903ba2a"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.818367 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" event={"ID":"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3","Type":"ContainerStarted","Data":"d056fe093f94a939a4e8ccb4477254ef410be94b78603cb91fc32d4a8ed21584"} Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.818465 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.818639 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kjkn"] Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.825207 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.829910 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wq8f\" (UniqueName: \"kubernetes.io/projected/d6d0cf39-4835-4f5d-8c5a-9521331913ac-kube-api-access-7wq8f\") pod \"marketplace-operator-79b997595-txbq6\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.839985 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/201151bb-7b5e-4564-ae1c-9b0b76e19778-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6tq7\" (UID: \"201151bb-7b5e-4564-ae1c-9b0b76e19778\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.860150 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nxpmk" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.862496 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58vs\" (UniqueName: \"kubernetes.io/projected/e6f6285d-f680-4eec-ad4d-b9375b31bd21-kube-api-access-v58vs\") pod \"machine-config-controller-84d6567774-vcssw\" (UID: \"e6f6285d-f680-4eec-ad4d-b9375b31bd21\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.881154 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9fz2\" (UniqueName: \"kubernetes.io/projected/3ba1ee3d-4cef-4fc3-8c31-5f544dd56244-kube-api-access-q9fz2\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5xfm\" (UID: \"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.900967 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/983dfbbb-8bc4-4935-b359-c885fc748600-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5k6fd\" (UID: \"983dfbbb-8bc4-4935-b359-c885fc748600\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.912852 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:36 crc kubenswrapper[4730]: E0131 16:32:36.913124 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.413096592 +0000 UTC m=+144.219153508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.934496 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-28kdr"] Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.936718 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nfxv\" (UniqueName: \"kubernetes.io/projected/469740fc-098b-4156-b459-02d7a1afefab-kube-api-access-2nfxv\") pod \"ingress-canary-wc5vj\" (UID: \"469740fc-098b-4156-b459-02d7a1afefab\") " pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.950245 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8srld\" (UniqueName: \"kubernetes.io/projected/a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55-kube-api-access-8srld\") pod \"service-ca-operator-777779d784-fl66m\" (UID: \"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.975354 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsl4q\" (UniqueName: \"kubernetes.io/projected/216a1f0f-785a-4dfa-b084-501b799637b7-kube-api-access-dsl4q\") pod \"csi-hostpathplugin-gj2x5\" (UID: \"216a1f0f-785a-4dfa-b084-501b799637b7\") " pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.991221 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.993699 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwh6w\" (UniqueName: \"kubernetes.io/projected/fd7b5061-34b1-4b64-a7fc-1b4a0b70b366-kube-api-access-dwh6w\") pod \"packageserver-d55dfcdfc-9hl7b\" (UID: \"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:36 crc kubenswrapper[4730]: I0131 16:32:36.997045 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.011080 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.015143 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.015543 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.515499273 +0000 UTC m=+144.321556179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.017671 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.023431 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.042382 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.055577 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.069986 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.074494 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.080120 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.124447 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.125091 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 16:32:37.625074137 +0000 UTC m=+144.431131053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.145919 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.152065 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wc5vj" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.186493 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv"] Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.212149 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm"] Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.227086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.227467 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.727456288 +0000 UTC m=+144.533513204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.273240 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cp5tf"] Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.327712 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.327888 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 16:32:37.827860049 +0000 UTC m=+144.633916965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.327997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.328465 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.828455907 +0000 UTC m=+144.634512823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: W0131 16:32:37.417397 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7798a0ca_0eb6_49e0_b531_e021ddbb7587.slice/crio-39750ccb1b8a70c478697afa799089658442a6607d24295c42bfa12005e80713 WatchSource:0}: Error finding container 39750ccb1b8a70c478697afa799089658442a6607d24295c42bfa12005e80713: Status 404 returned error can't find the container with id 39750ccb1b8a70c478697afa799089658442a6607d24295c42bfa12005e80713 Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.429665 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.430047 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:37.930029733 +0000 UTC m=+144.736086649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.431098 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" podStartSLOduration=122.431079284 podStartE2EDuration="2m2.431079284s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:37.409817591 +0000 UTC m=+144.215874507" watchObservedRunningTime="2026-01-31 16:32:37.431079284 +0000 UTC m=+144.237136200" Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.431242 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5"] Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.463410 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk49s" podStartSLOduration=122.463393987 podStartE2EDuration="2m2.463393987s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:37.460448169 +0000 UTC m=+144.266505085" watchObservedRunningTime="2026-01-31 16:32:37.463393987 +0000 UTC m=+144.269450903" Jan 31 16:32:37 crc kubenswrapper[4730]: W0131 16:32:37.526191 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83601b9_c609_468f_8c2d_34a8a94e42d1.slice/crio-227cfb12466bbf06d7bfaa6717090d9a40724d21b4fdae83a31c7d943833f145 WatchSource:0}: Error finding container 227cfb12466bbf06d7bfaa6717090d9a40724d21b4fdae83a31c7d943833f145: Status 404 returned error can't find the container with id 227cfb12466bbf06d7bfaa6717090d9a40724d21b4fdae83a31c7d943833f145 Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.530881 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.531150 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.031139975 +0000 UTC m=+144.837196891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.632042 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.632469 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.132452874 +0000 UTC m=+144.938509790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.733075 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.734323 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.234308969 +0000 UTC m=+145.040365885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.834176 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.834536 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.334520724 +0000 UTC m=+145.140577640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.861020 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" event={"ID":"dd2f6d82-d306-46dc-a938-2394b017b906","Type":"ContainerStarted","Data":"b418dcc4e2766f2530f75b62e3dbe313d0b8e07d24af7bec16d54b6c3870e3aa"} Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.885056 4730 generic.go:334] "Generic (PLEG): container finished" podID="5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3" containerID="4b4ec9a8f060402ab13706051c335c3373acf2b55cdc24555b133cf5d3b138f3" exitCode=0 Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.885160 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" event={"ID":"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3","Type":"ContainerDied","Data":"4b4ec9a8f060402ab13706051c335c3373acf2b55cdc24555b133cf5d3b138f3"} Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.938753 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:37 crc kubenswrapper[4730]: E0131 16:32:37.939198 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.439178572 +0000 UTC m=+145.245235488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.973835 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" event={"ID":"c83601b9-c609-468f-8c2d-34a8a94e42d1","Type":"ContainerStarted","Data":"227cfb12466bbf06d7bfaa6717090d9a40724d21b4fdae83a31c7d943833f145"} Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.975584 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-28kdr" event={"ID":"0d637d59-da07-4756-8234-e17cba93e1b0","Type":"ContainerStarted","Data":"66601b1c16562fa5f720298312da124d83840dd24946f53ad9ec279fb448a054"} Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.976470 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" event={"ID":"3a966345-1030-44c4-bf3a-6547e5d3aeda","Type":"ContainerStarted","Data":"ba0f33ca29e2cd06254d3fc39f13c7613a359d6e77e6628fb1696b603bb3cc97"} Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.977292 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" event={"ID":"5625c912-fe62-4364-9ca3-006d0bfbd502","Type":"ContainerStarted","Data":"8992d609d9e37a08821b95b158c49ba0bde91f86feaa3594f7670df403d1b7c0"} Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.978046 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" event={"ID":"f3e4348b-10b3-482a-a64d-4c2bfe52fb69","Type":"ContainerStarted","Data":"028623e425929302f815c3bfca034607c0890b76a221aea0f3052f131b64fc37"} Jan 31 16:32:37 crc kubenswrapper[4730]: I0131 16:32:37.978785 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2bcp4" event={"ID":"e8d1e83c-c1a5-4565-b1bc-454b416c6039","Type":"ContainerStarted","Data":"2982a0233ce4927dcc990f13fb94011179ddf9525d3434d693f79fc6644979c7"} Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.004946 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.015010 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4524a04-3cf1-48b4-9af1-ca47b1edf9e5" containerID="00860120e6f875d9cc0629006b942e1da86b4236dcc5258baa278f0e46a7d24b" exitCode=0 Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.015902 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" event={"ID":"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5","Type":"ContainerDied","Data":"00860120e6f875d9cc0629006b942e1da86b4236dcc5258baa278f0e46a7d24b"} Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.041231 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.041636 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.541619854 +0000 UTC m=+145.347676760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.090410 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" event={"ID":"d853c17f-0402-432b-bdee-1c8df9fa0093","Type":"ContainerStarted","Data":"3fa049b725e43dbc838d65cfbdb2bbd7a4a95daf9c724577cffb09ddada93a8a"} Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.109936 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" podStartSLOduration=123.109917929 podStartE2EDuration="2m3.109917929s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:38.097270432 +0000 UTC m=+144.903327368" watchObservedRunningTime="2026-01-31 16:32:38.109917929 +0000 UTC m=+144.915974845" Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.142342 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.143796 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.643784878 +0000 UTC m=+145.449841794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.169723 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwc2k" event={"ID":"887bb6af-277c-4837-b71a-6a94d0eb2edf","Type":"ContainerStarted","Data":"c878047f6bb9673b2b875c77c5c610fca47d59bd009337536b8cf2b0f0083cc8"} Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.247858 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.247953 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.747928631 +0000 UTC m=+145.553985547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.248224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.248574 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.74856215 +0000 UTC m=+145.554619066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.252668 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" event={"ID":"222cebc4-19ee-44bb-9de4-da091e798019","Type":"ContainerStarted","Data":"00fecee6f414146cb62d82b076abc05fc529e232a779a9aea69c5a6b31190c49"} Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.335136 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" event={"ID":"7798a0ca-0eb6-49e0-b531-e021ddbb7587","Type":"ContainerStarted","Data":"39750ccb1b8a70c478697afa799089658442a6607d24295c42bfa12005e80713"} Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.339409 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.360404 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.361657 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.861640689 +0000 UTC m=+145.667697605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.373295 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nxpmk" event={"ID":"817788cb-28d2-41a7-a5c8-b19287a6aa8b","Type":"ContainerStarted","Data":"a62ba236ea99d67a82c9453587642f6bfc1a9bcd6a643ba8a40622fd5708c603"} Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.410827 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.411028 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.461939 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.464484 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:38.964468343 +0000 UTC m=+145.770525259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.473429 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.473763 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjcxz" podStartSLOduration=123.473745289 podStartE2EDuration="2m3.473745289s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:38.459157674 +0000 UTC m=+145.265214590" watchObservedRunningTime="2026-01-31 16:32:38.473745289 +0000 UTC m=+145.279802205" Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.563920 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.564530 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.064511903 +0000 UTC m=+145.870568819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.654982 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-frj85" podStartSLOduration=123.654949217 podStartE2EDuration="2m3.654949217s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:38.643410623 +0000 UTC m=+145.449467539" watchObservedRunningTime="2026-01-31 16:32:38.654949217 +0000 UTC m=+145.461006133" Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.670661 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.677286 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.177268342 +0000 UTC m=+145.983325258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.774043 4730 csr.go:261] certificate signing request csr-lxft4 is approved, waiting to be issued Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.777070 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.777348 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.277333403 +0000 UTC m=+146.083390319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.789354 4730 csr.go:257] certificate signing request csr-lxft4 is issued Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790769 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5vcn4"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790789 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790815 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790825 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6v2xk"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790834 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ks8gz"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790849 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790861 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.790870 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-j5kgc"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.883394 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv"] Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.884031 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.884382 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.384370062 +0000 UTC m=+146.190426978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.986776 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.986975 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.486949398 +0000 UTC m=+146.293006314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:38 crc kubenswrapper[4730]: I0131 16:32:38.987127 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:38 crc kubenswrapper[4730]: E0131 16:32:38.987595 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.487582987 +0000 UTC m=+146.293639893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.002753 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9pj8k" podStartSLOduration=124.002724298 podStartE2EDuration="2m4.002724298s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:38.978138606 +0000 UTC m=+145.784195522" watchObservedRunningTime="2026-01-31 16:32:39.002724298 +0000 UTC m=+145.808781214" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.004101 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wzp9m" podStartSLOduration=124.004097129 podStartE2EDuration="2m4.004097129s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.000334657 +0000 UTC m=+145.806391583" watchObservedRunningTime="2026-01-31 16:32:39.004097129 +0000 UTC m=+145.810154045" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.095656 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.102487 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.102603 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.602586103 +0000 UTC m=+146.408643019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.102741 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.103109 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.603101479 +0000 UTC m=+146.409158395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.203269 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.203633 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.703618653 +0000 UTC m=+146.509675569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.305829 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.306141 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.806130478 +0000 UTC m=+146.612187394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.309936 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.314301 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.358395 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gj2x5"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.406450 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.407335 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:39.907306192 +0000 UTC m=+146.713363108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.461675 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.496331 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.502170 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" event={"ID":"d96093ab-8af5-4e3c-b89e-601cd9581b80","Type":"ContainerStarted","Data":"ffb27943939e3bde78e0deb4dee2cc05d2bdf45d15fcdab58007305d6249face"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.513099 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.513543 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.013532367 +0000 UTC m=+146.819589283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: W0131 16:32:39.538079 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f6285d_f680_4eec_ad4d_b9375b31bd21.slice/crio-946f1460edbffad55b9342e6a9de936d4c857e3c3197a177e734558f9c2bc26b WatchSource:0}: Error finding container 946f1460edbffad55b9342e6a9de936d4c857e3c3197a177e734558f9c2bc26b: Status 404 returned error can't find the container with id 946f1460edbffad55b9342e6a9de936d4c857e3c3197a177e734558f9c2bc26b Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.564433 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wc5vj"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.568222 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fl66m"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.582269 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2bcp4" event={"ID":"e8d1e83c-c1a5-4565-b1bc-454b416c6039","Type":"ContainerStarted","Data":"bda481bb20cafb783f80d7b28fa5e4903dddd779d62bb4a9c3b6a848f4a5d6fb"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.582305 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2bcp4" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.583330 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-txbq6"] Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.584097 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bcp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.584154 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bcp4" podUID="e8d1e83c-c1a5-4565-b1bc-454b416c6039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.618686 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.620182 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.120146993 +0000 UTC m=+146.926203909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.623857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" event={"ID":"dd2f6d82-d306-46dc-a938-2394b017b906","Type":"ContainerStarted","Data":"fc36a4a025ff42ab6da8b3a798471660b87813bb0dcd36141ab98e8dcf34d9e0"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.638425 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl"] Jan 31 16:32:39 crc kubenswrapper[4730]: W0131 16:32:39.647086 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod201151bb_7b5e_4564_ae1c_9b0b76e19778.slice/crio-8b0ccda6c0fa8cbd32fb520c119aa2acda92d9e389d0a0a965e46b58e260d0ec WatchSource:0}: Error finding container 8b0ccda6c0fa8cbd32fb520c119aa2acda92d9e389d0a0a965e46b58e260d0ec: Status 404 returned error can't find the container with id 8b0ccda6c0fa8cbd32fb520c119aa2acda92d9e389d0a0a965e46b58e260d0ec Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.649238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwc2k" event={"ID":"887bb6af-277c-4837-b71a-6a94d0eb2edf","Type":"ContainerStarted","Data":"5f06034bf521b1f81ee60af3645daf63417cf2c8765ad92c7d7580b1228ff0fe"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.661489 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" event={"ID":"3a966345-1030-44c4-bf3a-6547e5d3aeda","Type":"ContainerStarted","Data":"bc87be6b6e6072a3213af2dbee45a539869bb4d53e1232a11d179e24b2663831"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.681090 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" event={"ID":"f4115c67-25d3-4bdd-81ca-b63122b92fda","Type":"ContainerStarted","Data":"fd92d7248e39df6b096197cbbe06ecdebb57213cec3f573667135296e014852f"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.713594 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" event={"ID":"5ce586ff-70fb-4890-9044-5693734e5d8e","Type":"ContainerStarted","Data":"8347fb833d97802eefca4258ce446b58f9910a2b3f3dbb7d4a8c69b9731f1e20"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.722656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.727156 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.227140961 +0000 UTC m=+147.033197867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.728457 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" event={"ID":"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366","Type":"ContainerStarted","Data":"6f3c9f2c369bb152684bdbcdade28ab210a4fbda2831b7a746f1e7d1814e9d3f"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.734271 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" event={"ID":"e0ac8516-d776-4d92-933e-1d6a8d427d5f","Type":"ContainerStarted","Data":"1ad3f6b8cba537eb1770da17f680e4f0bf99613d7d1118269c9c5912d699c9f5"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.738494 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" event={"ID":"f2b47509-6f1d-40c5-94d7-10aa37fa5dce","Type":"ContainerStarted","Data":"1087cbabf72e4e40d97f779c48da5fc089a5fd176ee6a723472fe3782992a5db"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.739779 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nxpmk" event={"ID":"817788cb-28d2-41a7-a5c8-b19287a6aa8b","Type":"ContainerStarted","Data":"f4dd9fe0b810d37064e09947e5783a85990942656b90870be85075e033d6d3d7"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.741598 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" event={"ID":"5104074c-31a4-4e5f-af89-97ad9a1ab8ad","Type":"ContainerStarted","Data":"8697a7ae7228b263f0a617d60ca6eabf941ddbeed037d716463a94f519f29803"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.749670 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5vcn4" event={"ID":"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8","Type":"ContainerStarted","Data":"955a446decbfb970eaf22106bedab1e21f1100756027224e1e6b088fbd7649f6"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.753958 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jwc2k" podStartSLOduration=124.75394352 podStartE2EDuration="2m4.75394352s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.753692562 +0000 UTC m=+146.559749468" watchObservedRunningTime="2026-01-31 16:32:39.75394352 +0000 UTC m=+146.560000436" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.769790 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-28kdr" event={"ID":"0d637d59-da07-4756-8234-e17cba93e1b0","Type":"ContainerStarted","Data":"d9ac62064e01535b658224d8e8c86ad69a4390bad7ff0db1c75d3ae55950a4e4"} Jan 31 16:32:39 crc 
kubenswrapper[4730]: I0131 16:32:39.770470 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.776768 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6v2xk" event={"ID":"8100d0f3-9c7f-4835-b98a-c79cc76c29ef","Type":"ContainerStarted","Data":"4f31d040924df93618ff60ca51aea1dccb98144352f6fe4a04eeb38de3651fc6"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.777777 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" event={"ID":"983dfbbb-8bc4-4935-b359-c885fc748600","Type":"ContainerStarted","Data":"d79abe13eea43cfa7f659e0a4158b0e2792b0d1784f96f0a9086cf58e28ddffc"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.778896 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" event={"ID":"f3e4348b-10b3-482a-a64d-4c2bfe52fb69","Type":"ContainerStarted","Data":"058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.779570 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.782938 4730 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5kjkn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.782996 4730 patch_prober.go:28] interesting pod/console-operator-58897d9998-28kdr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.783003 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" podUID="f3e4348b-10b3-482a-a64d-4c2bfe52fb69" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.783027 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-28kdr" podUID="0d637d59-da07-4756-8234-e17cba93e1b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.786859 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" event={"ID":"0057e5b1-8c91-43c4-86ed-337c6e69caf9","Type":"ContainerStarted","Data":"ff4ab80a058326ec3c59982371651ce0d7150be06e41a70f7b2e628c294bd639"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.787250 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.790254 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration 
is 2027-01-31 16:27:38 +0000 UTC, rotation deadline is 2026-11-16 23:52:18.796108113 +0000 UTC Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.790298 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6943h19m39.005828931s for next certificate rotation Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.790630 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tnfvq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.790672 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" podUID="0057e5b1-8c91-43c4-86ed-337c6e69caf9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.796858 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" event={"ID":"ee6fccbc-e15d-4cbb-a200-b77420363b3f","Type":"ContainerStarted","Data":"7911b5b321bd6283f888d5b3cfd762e61c246d67e7f297d96d0ba57cf7e37d81"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.800484 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2bcp4" podStartSLOduration=124.800467206 podStartE2EDuration="2m4.800467206s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.798039463 +0000 UTC m=+146.604096379" watchObservedRunningTime="2026-01-31 16:32:39.800467206 +0000 UTC m=+146.606524122" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.824199 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.825432 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.325415959 +0000 UTC m=+147.131472875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.831093 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jmpc6" podStartSLOduration=124.831077188 podStartE2EDuration="2m4.831077188s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.826544403 +0000 UTC m=+146.632601319" watchObservedRunningTime="2026-01-31 16:32:39.831077188 +0000 UTC m=+146.637134104" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.845253 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" event={"ID":"7798a0ca-0eb6-49e0-b531-e021ddbb7587","Type":"ContainerStarted","Data":"9bce27c7fdddef40dc39bc555dabbb1d18ce04a8c7489186a0bec609b5251a0d"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.845894 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.856566 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" event={"ID":"6795c6e3-2333-4112-9ee7-b6074347208b","Type":"ContainerStarted","Data":"e051141d408b7ff3c000f51aa1a780341a4df352d9d3192bada39b28e4748f84"} Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.927563 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:39 crc kubenswrapper[4730]: E0131 16:32:39.928256 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.428218032 +0000 UTC m=+147.234274948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.932099 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-28kdr" podStartSLOduration=124.932066416 podStartE2EDuration="2m4.932066416s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.867773701 +0000 UTC m=+146.673830617" watchObservedRunningTime="2026-01-31 16:32:39.932066416 +0000 UTC m=+146.738123332" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.932463 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" podStartSLOduration=124.932458508 podStartE2EDuration="2m4.932458508s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.93118757 +0000 UTC m=+146.737244486" watchObservedRunningTime="2026-01-31 16:32:39.932458508 +0000 UTC m=+146.738515424" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.965444 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nxpmk" podStartSLOduration=5.96542776 podStartE2EDuration="5.96542776s" podCreationTimestamp="2026-01-31 16:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.964281506 +0000 UTC m=+146.770338432" watchObservedRunningTime="2026-01-31 16:32:39.96542776 +0000 UTC m=+146.771484676" Jan 31 16:32:39 crc kubenswrapper[4730]: I0131 16:32:39.995300 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" podStartSLOduration=124.99528271 podStartE2EDuration="2m4.99528271s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:39.995141586 +0000 UTC m=+146.801198492" watchObservedRunningTime="2026-01-31 16:32:39.99528271 +0000 UTC m=+146.801339626" Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.041387 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.052391 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 16:32:40.552367501 +0000 UTC m=+147.358424417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.064092 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" podStartSLOduration=125.064068709 podStartE2EDuration="2m5.064068709s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:40.040531898 +0000 UTC m=+146.846588814" watchObservedRunningTime="2026-01-31 16:32:40.064068709 +0000 UTC m=+146.870125665" Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.074896 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.086234 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.086290 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.093496 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" podStartSLOduration=125.093482396 podStartE2EDuration="2m5.093482396s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:40.090715753 +0000 UTC m=+146.896772669" watchObservedRunningTime="2026-01-31 16:32:40.093482396 +0000 UTC m=+146.899539312" Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.162610 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.163386 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.663374158 +0000 UTC m=+147.469431074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.264111 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.264900 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.764883082 +0000 UTC m=+147.570939998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.371949 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.372304 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.872277952 +0000 UTC m=+147.678334868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.472830 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.473181 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:40.973166318 +0000 UTC m=+147.779223234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.575086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.575398 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.075386343 +0000 UTC m=+147.881443259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.676602 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.677047 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.177014381 +0000 UTC m=+147.983071307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.677312 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.677628 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.177620019 +0000 UTC m=+147.983676925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.777900 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.778287 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.278271388 +0000 UTC m=+148.084328304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.879433 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.879705 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.3796928 +0000 UTC m=+148.185749716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.880013 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" event={"ID":"5ce586ff-70fb-4890-9044-5693734e5d8e","Type":"ContainerStarted","Data":"203ff3ea6f00655ac6f0355526b6b75deebbf8c19039d4e4d84832eca3327346"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.882615 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" event={"ID":"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244","Type":"ContainerStarted","Data":"bc237fd27dedc836f006da99eca4c5eb3fe48a4e01867b1805932c26ad51075c"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.884203 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" event={"ID":"d4524a04-3cf1-48b4-9af1-ca47b1edf9e5","Type":"ContainerStarted","Data":"6ad565b25b2fa1697f70d39ce9da3615fb628790a00790282de7627475ba374a"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.889452 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" event={"ID":"e6f6285d-f680-4eec-ad4d-b9375b31bd21","Type":"ContainerStarted","Data":"946f1460edbffad55b9342e6a9de936d4c857e3c3197a177e734558f9c2bc26b"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.905091 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" event={"ID":"5625c912-fe62-4364-9ca3-006d0bfbd502","Type":"ContainerStarted","Data":"b9285c6fdf6357696972fe7dc47bdf23e0cd211d263684c27209db86ac793441"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.933133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" event={"ID":"b61a61bd-3aaa-42b6-9681-2945b18462c2","Type":"ContainerStarted","Data":"ab5d64ae10400ba0b9491f8991adc5a601b3532bafc3e3e123b49da1929b68d9"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.933191 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" event={"ID":"b61a61bd-3aaa-42b6-9681-2945b18462c2","Type":"ContainerStarted","Data":"5572516bea8b91813f7e0ae490bcc32c4fe309631d8ca91fba4b806d1c108fb3"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.934574 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grxdz" podStartSLOduration=125.934535073 podStartE2EDuration="2m5.934535073s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:40.933916115 +0000 UTC m=+147.739973031" watchObservedRunningTime="2026-01-31 16:32:40.934535073 +0000 UTC m=+147.740591989" Jan 31 
16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.957704 4730 generic.go:334] "Generic (PLEG): container finished" podID="f2b47509-6f1d-40c5-94d7-10aa37fa5dce" containerID="bc4f00df28e8f37ea95554d097a9d663572ab9c577eb95d26eb3e88b059a2b39" exitCode=0 Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.958627 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" event={"ID":"f2b47509-6f1d-40c5-94d7-10aa37fa5dce","Type":"ContainerDied","Data":"bc4f00df28e8f37ea95554d097a9d663572ab9c577eb95d26eb3e88b059a2b39"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.972427 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" event={"ID":"201151bb-7b5e-4564-ae1c-9b0b76e19778","Type":"ContainerStarted","Data":"8b0ccda6c0fa8cbd32fb520c119aa2acda92d9e389d0a0a965e46b58e260d0ec"} Jan 31 16:32:40 crc kubenswrapper[4730]: I0131 16:32:40.983230 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:40 crc kubenswrapper[4730]: E0131 16:32:40.984470 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.484454771 +0000 UTC m=+148.290511687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.005418 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" event={"ID":"3a966345-1030-44c4-bf3a-6547e5d3aeda","Type":"ContainerStarted","Data":"65ba441f95be24efd9521127cbf907310fc54620a8fd14c09659a665a56daec1"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.034035 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" event={"ID":"f4115c67-25d3-4bdd-81ca-b63122b92fda","Type":"ContainerStarted","Data":"66b4aab4126ee9d20e242ef7e764af4446ccbd982e9152793b3162d1f31e16da"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.047987 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" event={"ID":"c83601b9-c609-468f-8c2d-34a8a94e42d1","Type":"ContainerStarted","Data":"c3f1ed37a41029080e205ac2f68ad6e785e96c7550142408a0e3eaa4869aa859"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.084451 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:41 crc kubenswrapper[4730]: 
[-]has-synced failed: reason withheld Jan 31 16:32:41 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:41 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.084501 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.085831 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.090457 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.590445709 +0000 UTC m=+148.396502625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.105060 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" podStartSLOduration=126.105044994 podStartE2EDuration="2m6.105044994s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.103794566 +0000 UTC m=+147.909851482" watchObservedRunningTime="2026-01-31 16:32:41.105044994 +0000 UTC m=+147.911101910" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.105895 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qf5fm" podStartSLOduration=126.105890999 podStartE2EDuration="2m6.105890999s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.021668859 +0000 UTC m=+147.827725765" watchObservedRunningTime="2026-01-31 16:32:41.105890999 +0000 UTC m=+147.911947915" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.122529 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" event={"ID":"7798a0ca-0eb6-49e0-b531-e021ddbb7587","Type":"ContainerStarted","Data":"2339c8d7ecdd2b3cb5e902e2894149385f2ce731ba66750cf81b2127946062dc"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.128510 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" 
event={"ID":"d6d0cf39-4835-4f5d-8c5a-9521331913ac","Type":"ContainerStarted","Data":"145ef8c915165b571490b4e9b80525d466762bece7835f637e874248074aadf3"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.134231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" event={"ID":"d96093ab-8af5-4e3c-b89e-601cd9581b80","Type":"ContainerStarted","Data":"f49a90624ff9484e002791a56a08d14482867a2eda297767da05e2f11bdae32c"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.135754 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" event={"ID":"edec59e3-15cb-4032-a5a9-e25e12cc6e9e","Type":"ContainerStarted","Data":"804ca6c51cce896944f3dc6c3826c2dbc9aca9393562bb2884b69be3afcf3916"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.136623 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wc5vj" event={"ID":"469740fc-098b-4156-b459-02d7a1afefab","Type":"ContainerStarted","Data":"cc0b4409ccb5a29cd27e77bdc1219091107856bc3ad82be8a16b25b3c3b8003b"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.156534 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" event={"ID":"0057e5b1-8c91-43c4-86ed-337c6e69caf9","Type":"ContainerStarted","Data":"c7b9890fd8e0fb1580a9f831894a7c468f1d09716e3cccc78b2ef353222906dd"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.157727 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tnfvq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.157753 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" podUID="0057e5b1-8c91-43c4-86ed-337c6e69caf9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.163553 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" podStartSLOduration=126.163536876 podStartE2EDuration="2m6.163536876s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.147386755 +0000 UTC m=+147.953443671" watchObservedRunningTime="2026-01-31 16:32:41.163536876 +0000 UTC m=+147.969593792" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.186495 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.187522 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 16:32:41.68750576 +0000 UTC m=+148.493562676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.189174 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lc466" event={"ID":"ee6fccbc-e15d-4cbb-a200-b77420363b3f","Type":"ContainerStarted","Data":"7440a63eb6488fe61dde69da4a0c25e7cbf21971ebc8ae5f63fc00fdd4e5cb78"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.202196 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5vcn4" event={"ID":"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8","Type":"ContainerStarted","Data":"12043e8de2d6ebce15e552c650f5f8a50b0c88d79d9853d9770e38f5f8cf4dff"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.203402 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" event={"ID":"216a1f0f-785a-4dfa-b084-501b799637b7","Type":"ContainerStarted","Data":"019f2c49b2de13a34c0874aa1bfd89c7cc4b8b7b0e3798ab3ff2a5d4cc98b3ac"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.205116 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" event={"ID":"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55","Type":"ContainerStarted","Data":"a75811566c9c27f714be2a3aeacfc2cabdd46067274a99a43a9742257837e1aa"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.206139 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6v2xk" event={"ID":"8100d0f3-9c7f-4835-b98a-c79cc76c29ef","Type":"ContainerStarted","Data":"73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.226465 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" event={"ID":"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3","Type":"ContainerStarted","Data":"6c8a09f51b421be504729db93b18cc5f426a225fef3cb1a5c7bcddaead624089"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.245393 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" event={"ID":"983dfbbb-8bc4-4935-b359-c885fc748600","Type":"ContainerStarted","Data":"76165b7009398e780ffb3ccf80f157ec039ee316dffbe5b8854d5adbd40036db"} Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.246001 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bcp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.246042 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bcp4" podUID="e8d1e83c-c1a5-4565-b1bc-454b416c6039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 
10.217.0.16:8080: connect: connection refused" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.287614 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.288882 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.78886273 +0000 UTC m=+148.594919716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.347045 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-cp5tf" podStartSLOduration=126.347030583 podStartE2EDuration="2m6.347030583s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.332966214 +0000 UTC m=+148.139023130" watchObservedRunningTime="2026-01-31 16:32:41.347030583 +0000 UTC m=+148.153087499" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.388432 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.390413 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.890391165 +0000 UTC m=+148.696448071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.484230 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nxqw5" podStartSLOduration=126.48421461 podStartE2EDuration="2m6.48421461s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.420034198 +0000 UTC m=+148.226091104" watchObservedRunningTime="2026-01-31 16:32:41.48421461 +0000 UTC m=+148.290271526" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.496909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.497228 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:41.997216108 +0000 UTC m=+148.803273024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.535450 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rdbnh" podStartSLOduration=126.535434706 podStartE2EDuration="2m6.535434706s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.491618801 +0000 UTC m=+148.297675717" watchObservedRunningTime="2026-01-31 16:32:41.535434706 +0000 UTC m=+148.341491622" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.543259 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-j5kgc" podStartSLOduration=125.543241529 podStartE2EDuration="2m5.543241529s" podCreationTimestamp="2026-01-31 16:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.534131768 +0000 UTC m=+148.340188684" watchObservedRunningTime="2026-01-31 16:32:41.543241529 +0000 UTC m=+148.349298445" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.572557 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6v2xk" podStartSLOduration=126.572541632 podStartE2EDuration="2m6.572541632s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:41.571321706 +0000 UTC m=+148.377378622" watchObservedRunningTime="2026-01-31 16:32:41.572541632 +0000 UTC m=+148.378598548" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.598958 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.599383 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.099368121 +0000 UTC m=+148.905425037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.708780 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.709166 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.209153832 +0000 UTC m=+149.015210758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.790902 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.809730 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.810025 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.310009687 +0000 UTC m=+149.116066593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:41 crc kubenswrapper[4730]: I0131 16:32:41.910889 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:41 crc kubenswrapper[4730]: E0131 16:32:41.911533 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.411522381 +0000 UTC m=+149.217579297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.013243 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.013361 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.513344665 +0000 UTC m=+149.319401581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.013589 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.013874 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.513866491 +0000 UTC m=+149.319923407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.079379 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:42 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:42 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:42 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.079808 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.115462 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.115877 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.615859239 +0000 UTC m=+149.421916155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.217484 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.217821 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.717793525 +0000 UTC m=+149.523850431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.245963 4730 patch_prober.go:28] interesting pod/console-operator-58897d9998-28kdr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.246050 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-28kdr" podUID="0d637d59-da07-4756-8234-e17cba93e1b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.251962 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" event={"ID":"5104074c-31a4-4e5f-af89-97ad9a1ab8ad","Type":"ContainerStarted","Data":"2389545d7f99be00877eee32f9f707d8aa4f488725771666d9791e51765bfe4c"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.253632 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" event={"ID":"fd7b5061-34b1-4b64-a7fc-1b4a0b70b366","Type":"ContainerStarted","Data":"728c096f3c1f631d194face1f217febbe80c70efc01630b7d9e136083a2f1c71"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.255954 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.256027 4730 patch_prober.go:28] interesting 
pod/packageserver-d55dfcdfc-9hl7b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.256061 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" podUID="fd7b5061-34b1-4b64-a7fc-1b4a0b70b366" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.257941 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" event={"ID":"e0ac8516-d776-4d92-933e-1d6a8d427d5f","Type":"ContainerStarted","Data":"336409b84294e03019e0e2335a2d08184bf9a4addced96496bbb5a742f6e7209"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.262574 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.262710 4730 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ttmdv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.262748 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" podUID="e0ac8516-d776-4d92-933e-1d6a8d427d5f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.267231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" event={"ID":"edec59e3-15cb-4032-a5a9-e25e12cc6e9e","Type":"ContainerStarted","Data":"e084d1ee1c4b0889a434f9728ca438d2168a9867f4138baf98b119187a3baa56"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.267267 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" event={"ID":"edec59e3-15cb-4032-a5a9-e25e12cc6e9e","Type":"ContainerStarted","Data":"ccdeb03c16cc3101519164e4c74c2d927422b916f535bf9168a3222b818b1dd2"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.269157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" event={"ID":"e6f6285d-f680-4eec-ad4d-b9375b31bd21","Type":"ContainerStarted","Data":"214b6d2d078e3710032bd17338302205e3a291b11fae1ca6f3b1d9d50039e3f2"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.269183 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" event={"ID":"e6f6285d-f680-4eec-ad4d-b9375b31bd21","Type":"ContainerStarted","Data":"e0478b81fbb7d7016faaee89784ae2b49e5fa74b6e114f96f75c9471fd389fcf"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.270823 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" 
event={"ID":"983dfbbb-8bc4-4935-b359-c885fc748600","Type":"ContainerStarted","Data":"2aa753802bc81032924a2a7f6a60b847f656802f80e503ad20290e605757bc52"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.272691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" event={"ID":"f2b47509-6f1d-40c5-94d7-10aa37fa5dce","Type":"ContainerStarted","Data":"14e8d12b1295d0c79a95f5116c2b70bb2c5e0cf293b9358ee38c75084ea32d23"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.273040 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.274480 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5vcn4" event={"ID":"6ed0e8d6-c52f-421e-afc6-58098dfaf5a8","Type":"ContainerStarted","Data":"b87d7df165a0f23a24205e93a208013bc1759b9574734d1d8cff41bfc90dcb4f"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.274858 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.284335 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" event={"ID":"6795c6e3-2333-4112-9ee7-b6074347208b","Type":"ContainerStarted","Data":"9848a0db480a592e5115334b65dbf9571f55684afffd71294bbfd435dc8469e9"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.285998 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" event={"ID":"201151bb-7b5e-4564-ae1c-9b0b76e19778","Type":"ContainerStarted","Data":"f1ff5a6fea4d90b6e4e52c3f3cc3354d6bd71dfb0a1658bddc96073ac128afa2"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.294053 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" event={"ID":"5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3","Type":"ContainerStarted","Data":"65e8db8283c8b82d6c7f34e15397cb75c79509d7622e2df6a603e01ae8928312"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.296317 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wc5vj" event={"ID":"469740fc-098b-4156-b459-02d7a1afefab","Type":"ContainerStarted","Data":"fe2d16864ed11c12911fc401c2fa77d7b074d83ec984d2eda03638350ddd5bfc"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.297954 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" event={"ID":"216a1f0f-785a-4dfa-b084-501b799637b7","Type":"ContainerStarted","Data":"f4ce9d82463db115d569128a52e4e558b23c0edbfebda94a77af4fa61a2b5b38"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.300235 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" podStartSLOduration=127.300222951 podStartE2EDuration="2m7.300222951s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.298923772 +0000 UTC m=+149.104980688" watchObservedRunningTime="2026-01-31 16:32:42.300222951 +0000 UTC m=+149.106279877" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.302739 4730 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" event={"ID":"d6d0cf39-4835-4f5d-8c5a-9521331913ac","Type":"ContainerStarted","Data":"fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.304770 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-txbq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.304909 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.305052 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.311061 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" event={"ID":"a8fcc0a6-92b1-4ae2-bc72-b67dc7e5dc55","Type":"ContainerStarted","Data":"71b820cf07789b058db9f41b9d630fdef43543aa36854e1ed8257b585daebadd"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.315613 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" event={"ID":"3ba1ee3d-4cef-4fc3-8c31-5f544dd56244","Type":"ContainerStarted","Data":"6b22d12f2a1efa417ec0f25e2c33a9ff7463408f104b86cdabba8d29f5006f43"} Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.318406 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.318683 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.318709 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.318737 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.318770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.319784 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.819755493 +0000 UTC m=+149.625812409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.320353 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.325934 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.343358 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tnfvq" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.343397 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.343485 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.436532 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.456346 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6tq7" podStartSLOduration=127.456330312 podStartE2EDuration="2m7.456330312s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.372702321 +0000 UTC m=+149.178759237" watchObservedRunningTime="2026-01-31 16:32:42.456330312 +0000 UTC m=+149.262387228" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.461166 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5k6fd" podStartSLOduration=127.461156736 podStartE2EDuration="2m7.461156736s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.453981552 +0000 UTC m=+149.260038468" watchObservedRunningTime="2026-01-31 16:32:42.461156736 +0000 UTC m=+149.267213652" Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.472005 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:42.971977698 +0000 UTC m=+149.778034614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.583031 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.593438 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.593852 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.093834939 +0000 UTC m=+149.899891855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.606270 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.616541 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.621719 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vcssw" podStartSLOduration=127.621706259 podStartE2EDuration="2m7.621706259s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.565214396 +0000 UTC m=+149.371271312" watchObservedRunningTime="2026-01-31 16:32:42.621706259 +0000 UTC m=+149.427763175" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.680865 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wc5vj" podStartSLOduration=8.680847171 podStartE2EDuration="8.680847171s" podCreationTimestamp="2026-01-31 16:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.626019238 +0000 UTC m=+149.432076144" watchObservedRunningTime="2026-01-31 16:32:42.680847171 +0000 UTC m=+149.486904087" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.696589 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.696910 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.196897139 +0000 UTC m=+150.002954055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.743041 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wql8z" podStartSLOduration=127.743025934 podStartE2EDuration="2m7.743025934s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.68718735 +0000 UTC m=+149.493244266" watchObservedRunningTime="2026-01-31 16:32:42.743025934 +0000 UTC m=+149.549082850" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.774216 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" podStartSLOduration=127.774200063 podStartE2EDuration="2m7.774200063s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.744488227 +0000 UTC m=+149.550545143" watchObservedRunningTime="2026-01-31 16:32:42.774200063 +0000 UTC m=+149.580256979" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.776875 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ntvr6" podStartSLOduration=127.776868622 podStartE2EDuration="2m7.776868622s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.776303535 +0000 UTC m=+149.582360441" watchObservedRunningTime="2026-01-31 16:32:42.776868622 +0000 UTC m=+149.582925538" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.802349 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.803014 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.302996111 +0000 UTC m=+150.109053027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.823290 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" podStartSLOduration=127.823274825 podStartE2EDuration="2m7.823274825s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.820642796 +0000 UTC m=+149.626699712" watchObservedRunningTime="2026-01-31 16:32:42.823274825 +0000 UTC m=+149.629331741" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.848317 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5vcn4" podStartSLOduration=9.84830347 podStartE2EDuration="9.84830347s" podCreationTimestamp="2026-01-31 16:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.847234149 +0000 UTC m=+149.653291065" watchObservedRunningTime="2026-01-31 16:32:42.84830347 +0000 UTC m=+149.654360376" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.895520 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" podStartSLOduration=127.895504487 podStartE2EDuration="2m7.895504487s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.894909529 +0000 UTC m=+149.700966455" watchObservedRunningTime="2026-01-31 16:32:42.895504487 +0000 UTC m=+149.701561403" Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.904226 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:42 crc kubenswrapper[4730]: E0131 16:32:42.904590 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.404572427 +0000 UTC m=+150.210629343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:42 crc kubenswrapper[4730]: I0131 16:32:42.921293 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5xfm" podStartSLOduration=127.921280585 podStartE2EDuration="2m7.921280585s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.917516492 +0000 UTC m=+149.723573398" watchObservedRunningTime="2026-01-31 16:32:42.921280585 +0000 UTC m=+149.727337501" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.007906 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.008179 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.508163593 +0000 UTC m=+150.314220509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.008960 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" podStartSLOduration=128.008944986 podStartE2EDuration="2m8.008944986s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:42.977723796 +0000 UTC m=+149.783780712" watchObservedRunningTime="2026-01-31 16:32:43.008944986 +0000 UTC m=+149.815001912" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.084938 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:43 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:43 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:43 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.084992 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.111671 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.112415 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.612404319 +0000 UTC m=+150.418461235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.213208 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.213575 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.713554172 +0000 UTC m=+150.519611088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.315019 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.315333 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.815320304 +0000 UTC m=+150.621377220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.337644 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" event={"ID":"5104074c-31a4-4e5f-af89-97ad9a1ab8ad","Type":"ContainerStarted","Data":"e659427cc198a9b14baa5f0b0f67e4201ce2aa6597be7306ed6b53131f117d65"} Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.339480 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-txbq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.339521 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.371153 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fl66m" podStartSLOduration=128.371139708 podStartE2EDuration="2m8.371139708s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:43.05303885 +0000 UTC m=+149.859095766" watchObservedRunningTime="2026-01-31 16:32:43.371139708 +0000 UTC m=+150.177196624" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.372368 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7jq8n"] Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.373531 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.380052 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.391016 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ks8gz" podStartSLOduration=128.391001959 podStartE2EDuration="2m8.391001959s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:43.390818784 +0000 UTC m=+150.196875710" watchObservedRunningTime="2026-01-31 16:32:43.391001959 +0000 UTC m=+150.197058875" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.415642 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.417161 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:43.917139548 +0000 UTC m=+150.723196464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.459619 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jq8n"] Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.483686 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttmdv" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.518682 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9x9\" (UniqueName: \"kubernetes.io/projected/24e875c6-16c4-43f2-8533-7d1af60844fb-kube-api-access-8d9x9\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.518734 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.518758 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-utilities\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.518793 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-catalog-content\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.519060 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.019047664 +0000 UTC m=+150.825104570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.619786 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.619967 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.11993245 +0000 UTC m=+150.925989366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.620285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d9x9\" (UniqueName: \"kubernetes.io/projected/24e875c6-16c4-43f2-8533-7d1af60844fb-kube-api-access-8d9x9\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.620329 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.620382 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-utilities\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.620771 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.120749914 +0000 UTC m=+150.926806900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.620844 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-utilities\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.620417 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-catalog-content\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.620941 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-catalog-content\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.655150 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xwsps"] Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.657558 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.721676 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.721862 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.221814685 +0000 UTC m=+151.027871601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.721906 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbb5\" (UniqueName: \"kubernetes.io/projected/e8d7fc22-9a5c-4569-821d-c915ab1f5657-kube-api-access-qnbb5\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.722062 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-utilities\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.722180 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.722203 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-catalog-content\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.722490 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.222477815 +0000 UTC m=+151.028534731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.725215 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.759128 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d9x9\" (UniqueName: \"kubernetes.io/projected/24e875c6-16c4-43f2-8533-7d1af60844fb-kube-api-access-8d9x9\") pod \"community-operators-7jq8n\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.782138 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwsps"] Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.823070 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.823265 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.323238817 +0000 UTC m=+151.129295733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.823411 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbb5\" (UniqueName: \"kubernetes.io/projected/e8d7fc22-9a5c-4569-821d-c915ab1f5657-kube-api-access-qnbb5\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.823495 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-utilities\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.823554 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.823573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-catalog-content\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.823957 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.323946188 +0000 UTC m=+151.130003104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.824034 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-catalog-content\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.824139 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-utilities\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.843561 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vnkqr"] Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.844448 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.924366 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.924528 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-utilities\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.924597 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5npw6\" (UniqueName: \"kubernetes.io/projected/f77d01ac-b8b8-436b-9626-6230af5c95b7-kube-api-access-5npw6\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.924624 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-catalog-content\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:43 crc kubenswrapper[4730]: E0131 16:32:43.924755 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 16:32:44.424732481 +0000 UTC m=+151.230789397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.968601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbb5\" (UniqueName: \"kubernetes.io/projected/e8d7fc22-9a5c-4569-821d-c915ab1f5657-kube-api-access-qnbb5\") pod \"certified-operators-xwsps\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:43 crc kubenswrapper[4730]: I0131 16:32:43.970431 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.002041 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.029574 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnkqr"] Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.030350 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5npw6\" (UniqueName: \"kubernetes.io/projected/f77d01ac-b8b8-436b-9626-6230af5c95b7-kube-api-access-5npw6\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.030383 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-catalog-content\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.030440 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.030458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-utilities\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.030940 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-utilities\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 
16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.031158 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-catalog-content\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.031492 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.531467301 +0000 UTC m=+151.337524217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.089884 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:44 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:44 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:44 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.090372 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.119432 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-px9cf"] Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.121909 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.133681 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.134097 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.634077248 +0000 UTC m=+151.440134164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.138866 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5npw6\" (UniqueName: \"kubernetes.io/projected/f77d01ac-b8b8-436b-9626-6230af5c95b7-kube-api-access-5npw6\") pod \"community-operators-vnkqr\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.158358 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.250600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.250671 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-utilities\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.250703 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj49s\" (UniqueName: \"kubernetes.io/projected/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-kube-api-access-nj49s\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.250725 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-catalog-content\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.251025 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.751014292 +0000 UTC m=+151.557071208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.285666 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-px9cf"] Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.343782 4730 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9hl7b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.344024 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" podUID="fd7b5061-34b1-4b64-a7fc-1b4a0b70b366" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.354443 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.354636 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-utilities\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.354679 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj49s\" (UniqueName: \"kubernetes.io/projected/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-kube-api-access-nj49s\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.354706 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-catalog-content\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.354976 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.854934118 +0000 UTC m=+151.660991024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.355147 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-catalog-content\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.355289 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-utilities\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.390438 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj49s\" (UniqueName: \"kubernetes.io/projected/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-kube-api-access-nj49s\") pod \"certified-operators-px9cf\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.401984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"517706b5919478c29fdba9cca95ad69c0e12ec20b946f7120d8e187aab015a33"} Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.439042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" event={"ID":"216a1f0f-785a-4dfa-b084-501b799637b7","Type":"ContainerStarted","Data":"c0b2c4c3fb9bce3c85091f5945be74efe876bdee711de523ca7e850d117f004d"} Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.439852 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-txbq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.439901 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.455717 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.456073 4730 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:44.956061161 +0000 UTC m=+151.762118067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.469871 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.569662 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.571638 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.071611994 +0000 UTC m=+151.877668910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.671411 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.671739 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.171727576 +0000 UTC m=+151.977784492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: W0131 16:32:44.711850 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-f7ce22b7840a3c12c4a27210961105775062e8a5816f7d5c41b1f3e2bc5d6980 WatchSource:0}: Error finding container f7ce22b7840a3c12c4a27210961105775062e8a5816f7d5c41b1f3e2bc5d6980: Status 404 returned error can't find the container with id f7ce22b7840a3c12c4a27210961105775062e8a5816f7d5c41b1f3e2bc5d6980 Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.780619 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.780991 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.280974931 +0000 UTC m=+152.087031847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: W0131 16:32:44.788970 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-75d6123b1c39d076f2c073f7530ef1b3764db3722d2ca8352f6e576313acd84d WatchSource:0}: Error finding container 75d6123b1c39d076f2c073f7530ef1b3764db3722d2ca8352f6e576313acd84d: Status 404 returned error can't find the container with id 75d6123b1c39d076f2c073f7530ef1b3764db3722d2ca8352f6e576313acd84d Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.882453 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.883103 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 16:32:45.383090104 +0000 UTC m=+152.189147020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:44 crc kubenswrapper[4730]: I0131 16:32:44.984553 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:44 crc kubenswrapper[4730]: E0131 16:32:44.985010 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.48499219 +0000 UTC m=+152.291049106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.044543 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwsps"] Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.083777 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:45 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:45 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:45 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.083862 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.086893 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.087196 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 16:32:45.587185985 +0000 UTC m=+152.393242901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.145222 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9hl7b" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.187536 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.187835 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.687819993 +0000 UTC m=+152.493876909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.291700 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.292048 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.792036808 +0000 UTC m=+152.598093724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.395662 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.395817 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.895780979 +0000 UTC m=+152.701837895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.395895 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.396198 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.896189371 +0000 UTC m=+152.702246277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.405139 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnkqr"] Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.441999 4730 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ggbf6 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.442061 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" podUID="f2b47509-6f1d-40c5-94d7-10aa37fa5dce" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.480569 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwsps" event={"ID":"e8d7fc22-9a5c-4569-821d-c915ab1f5657","Type":"ContainerStarted","Data":"d32e6d6aae1c290bc0d67b957f32c8090fa6732f34905e6ef2d56871c1a3d4c5"} Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.498318 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.498643 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:45.998627253 +0000 UTC m=+152.804684169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.531209 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2fcc2e0bb98167c57ad2a7b523b92dc610d35e6fa48557879d44079a9c97b1e6"} Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.531480 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f7ce22b7840a3c12c4a27210961105775062e8a5816f7d5c41b1f3e2bc5d6980"} Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.532245 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.561831 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jq8n"] Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.562069 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.562194 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.580315 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ea2dea43d2063dd1d71be61c0bd3614b19cc4f1e2432549112b521936e634046"} Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.580365 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75d6123b1c39d076f2c073f7530ef1b3764db3722d2ca8352f6e576313acd84d"} Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.585261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6c5a59fbbaec948f7a0fe7daae3ff4802c8fbf4bd714470b928ff268b17aa737"} Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.586731 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.599494 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" 
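The mount and unmount failures above all report the same condition: the kubelet does not yet find kubevirt.io.hostpath-provisioner among its registered CSI drivers, so each operation is re-queued with a 500ms durationBeforeRetry until the node plugin registers later in this log. A minimal client-go sketch for checking that registration state from outside the node; this is a hypothetical diagnostic, not a tool referenced by the log, and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; point this at the CRC cluster's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Cluster-scoped CSIDriver objects: what the control plane knows about.
	drivers, err := clientset.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range drivers.Items {
		fmt.Println("CSIDriver:", d.Name)
	}

	// Per-node registration: the kubelet adds a driver here only after the node
	// plugin has registered over the plugin-registration socket, which is the
	// point at which the "not found in the list of registered CSI drivers"
	// errors in this log stop.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered on node crc:", d.Name)
	}
}

The CSINode check mirrors what the kubelet itself consults; once kubevirt.io.hostpath-provisioner appears there, the retries above can succeed.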
Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.601060 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.101035204 +0000 UTC m=+152.907092120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.611365 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.613547 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.615580 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" event={"ID":"216a1f0f-785a-4dfa-b084-501b799637b7","Type":"ContainerStarted","Data":"a3b50c14e3be80b90b1975a5530337dd62063125c82895bb15c4a4ea06193600"} Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.702063 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.703305 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.20327476 +0000 UTC m=+153.009331676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.764958 4730 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4cfvt container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]log ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]etcd ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/max-in-flight-filter ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 16:32:45 crc kubenswrapper[4730]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 16:32:45 crc kubenswrapper[4730]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 16:32:45 crc kubenswrapper[4730]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 16:32:45 crc kubenswrapper[4730]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 16:32:45 crc kubenswrapper[4730]: livez check failed Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.765018 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" podUID="5fb1cd7c-cc3f-4b59-9db9-8294120bd5f3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.803289 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.803566 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.303554937 +0000 UTC m=+153.109611843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.814771 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d7fc22_9a5c_4569_821d_c915ab1f5657.slice/crio-conmon-ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d7fc22_9a5c_4569_821d_c915ab1f5657.slice/crio-ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.831147 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-px9cf"] Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.833439 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7c9rs"] Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.834399 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.884457 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c9rs"] Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.906633 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.907118 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmgpt\" (UniqueName: \"kubernetes.io/projected/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-kube-api-access-xmgpt\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.907230 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-utilities\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.907302 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-catalog-content\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:45 crc kubenswrapper[4730]: E0131 16:32:45.907658 4730 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.407631137 +0000 UTC m=+153.213688053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:45 crc kubenswrapper[4730]: I0131 16:32:45.975383 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.009279 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.009342 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmgpt\" (UniqueName: \"kubernetes.io/projected/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-kube-api-access-xmgpt\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.009371 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-utilities\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.009391 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-catalog-content\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.009814 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-catalog-content\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.010058 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.510047629 +0000 UTC m=+153.316104545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.010565 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-utilities\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.086669 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmgpt\" (UniqueName: \"kubernetes.io/projected/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-kube-api-access-xmgpt\") pod \"redhat-marketplace-7c9rs\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.087090 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:46 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:46 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:46 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.087122 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.111555 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.112358 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.612326446 +0000 UTC m=+153.418383362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.144522 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mgkkn"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.147405 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.171033 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgkkn"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.201115 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.213134 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.213196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwjs\" (UniqueName: \"kubernetes.io/projected/61274bcb-156d-4bfd-806e-89500983ef42-kube-api-access-2zwjs\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.213248 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-catalog-content\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.213279 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-utilities\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.213562 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.713550432 +0000 UTC m=+153.519607348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.252339 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ggbf6" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.314518 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.814500669 +0000 UTC m=+153.620557585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.314542 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.314814 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-catalog-content\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.314859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-utilities\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.314914 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.314936 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zwjs\" (UniqueName: \"kubernetes.io/projected/61274bcb-156d-4bfd-806e-89500983ef42-kube-api-access-2zwjs\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 
16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.315519 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-catalog-content\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.316532 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-utilities\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.316985 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.816977483 +0000 UTC m=+153.623034399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.341633 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zwjs\" (UniqueName: \"kubernetes.io/projected/61274bcb-156d-4bfd-806e-89500983ef42-kube-api-access-2zwjs\") pod \"redhat-marketplace-mgkkn\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.360561 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bcp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.360617 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bcp4" podUID="e8d1e83c-c1a5-4565-b1bc-454b416c6039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.360571 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bcp4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.360665 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2bcp4" podUID="e8d1e83c-c1a5-4565-b1bc-454b416c6039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.372969 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 
16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.373714 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.378777 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.379283 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.394701 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.416564 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.416919 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b536334-651e-4060-b6ea-1dd32c86b72a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.416961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b536334-651e-4060-b6ea-1dd32c86b72a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.417068 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:46.917053325 +0000 UTC m=+153.723110241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.520151 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.520209 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b536334-651e-4060-b6ea-1dd32c86b72a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.520251 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b536334-651e-4060-b6ea-1dd32c86b72a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.520331 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b536334-651e-4060-b6ea-1dd32c86b72a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.520558 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:47.020540658 +0000 UTC m=+153.826597564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.535557 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f78ml"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.537056 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.555529 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.556081 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f78ml"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.556677 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.568857 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b536334-651e-4060-b6ea-1dd32c86b72a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.573904 4730 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.616874 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-28kdr" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.623304 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.623604 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnb6\" (UniqueName: \"kubernetes.io/projected/01ab894a-0ddc-46a2-8027-96606aae9396-kube-api-access-zjnb6\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.623697 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-utilities\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.623740 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-catalog-content\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.623857 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 16:32:47.123836806 +0000 UTC m=+153.929893722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.642208 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.642873 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.644092 4730 generic.go:334] "Generic (PLEG): container finished" podID="b61a61bd-3aaa-42b6-9681-2945b18462c2" containerID="ab5d64ae10400ba0b9491f8991adc5a601b3532bafc3e3e123b49da1929b68d9" exitCode=0 Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.644170 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" event={"ID":"b61a61bd-3aaa-42b6-9681-2945b18462c2","Type":"ContainerDied","Data":"ab5d64ae10400ba0b9491f8991adc5a601b3532bafc3e3e123b49da1929b68d9"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.644338 4730 patch_prober.go:28] interesting pod/console-f9d7485db-6v2xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.644409 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6v2xk" podUID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.659538 4730 generic.go:334] "Generic (PLEG): container finished" podID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerID="ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952" exitCode=0 Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.659615 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwsps" event={"ID":"e8d7fc22-9a5c-4569-821d-c915ab1f5657","Type":"ContainerDied","Data":"ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.664953 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.664942 4730 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T16:32:46.573935049Z","Handler":null,"Name":""} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.672366 4730 generic.go:334] "Generic (PLEG): container finished" podID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerID="53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a" exitCode=0 Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.672454 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-7jq8n" event={"ID":"24e875c6-16c4-43f2-8533-7d1af60844fb","Type":"ContainerDied","Data":"53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.672481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jq8n" event={"ID":"24e875c6-16c4-43f2-8533-7d1af60844fb","Type":"ContainerStarted","Data":"cf25a575ca568bb81c5506910efc7f11b523b95e68ae18a96121e7e36e8def33"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.679400 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c9rs"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.712178 4730 generic.go:334] "Generic (PLEG): container finished" podID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerID="13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff" exitCode=0 Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.712272 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnkqr" event={"ID":"f77d01ac-b8b8-436b-9626-6230af5c95b7","Type":"ContainerDied","Data":"13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.712298 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnkqr" event={"ID":"f77d01ac-b8b8-436b-9626-6230af5c95b7","Type":"ContainerStarted","Data":"6d1a82d7b22a1bd4fbf1dd73b560e91eda767ee9595f038350dfb820b02f658b"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.716220 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qmmfc"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.716674 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.717286 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.724587 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.724860 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-utilities\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.724997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-catalog-content\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.725117 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjnb6\" (UniqueName: \"kubernetes.io/projected/01ab894a-0ddc-46a2-8027-96606aae9396-kube-api-access-zjnb6\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.726261 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-utilities\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: E0131 16:32:46.727405 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 16:32:47.227390131 +0000 UTC m=+154.033447037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z6ftx" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.727754 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-catalog-content\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.750236 4730 generic.go:334] "Generic (PLEG): container finished" podID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerID="a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6" exitCode=0 Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.750324 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px9cf" event={"ID":"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e","Type":"ContainerDied","Data":"a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.750349 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px9cf" event={"ID":"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e","Type":"ContainerStarted","Data":"679ea35fb3e7021eb136e79169a000925a0d66b50d75d8213ebb6baf515a9149"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.753699 4730 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.753719 4730 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.754187 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmmfc"] Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.791183 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" event={"ID":"216a1f0f-785a-4dfa-b084-501b799637b7","Type":"ContainerStarted","Data":"1ed5d4a290c2d8975c1ff66d861335434ac9cbc873b3f1342aecd071ae7f9471"} Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.795906 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjnb6\" (UniqueName: \"kubernetes.io/projected/01ab894a-0ddc-46a2-8027-96606aae9396-kube-api-access-zjnb6\") pod \"redhat-operators-f78ml\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.817952 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tj4cc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.825929 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.826229 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-utilities\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.826280 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-catalog-content\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.826417 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7q78\" (UniqueName: \"kubernetes.io/projected/0b701a69-5acf-4822-a395-e35001c38825-kube-api-access-r7q78\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.838655 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.913845 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.928215 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7q78\" (UniqueName: \"kubernetes.io/projected/0b701a69-5acf-4822-a395-e35001c38825-kube-api-access-r7q78\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.928254 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.928309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-utilities\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.928328 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-catalog-content\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.928813 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-catalog-content\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.929509 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-utilities\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.970962 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7q78\" (UniqueName: \"kubernetes.io/projected/0b701a69-5acf-4822-a395-e35001c38825-kube-api-access-r7q78\") pod \"redhat-operators-qmmfc\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.972135 4730 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
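The csi_attacher entry just above ("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...") indicates the newly registered driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet skips NodeStageVolume and goes straight to the per-pod SetUp mount. A rough sketch, assuming the csi-hostpath socket path from the registration entries earlier in this log, of querying that capability directly; this is an illustrative, hypothetical probe rather than anything the log itself runs:

package main

import (
	"context"
	"fmt"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Socket path taken from the csi_plugin registration entries above.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	node := csi.NewNodeClient(conn)
	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		panic(err)
	}

	// If this prints false, there is nothing to stage: MountDevice is a no-op
	// and only NodePublishVolume (SetUp) runs, matching the entries that follow.
	stage := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			stage = true
		}
	}
	fmt.Println("STAGE_UNSTAGE_VOLUME advertised:", stage)
}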
Jan 31 16:32:46 crc kubenswrapper[4730]: I0131 16:32:46.972167 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.090791 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gj2x5" podStartSLOduration=13.090766507 podStartE2EDuration="13.090766507s" podCreationTimestamp="2026-01-31 16:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:47.04823185 +0000 UTC m=+153.854288766" watchObservedRunningTime="2026-01-31 16:32:47.090766507 +0000 UTC m=+153.896823413" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.095178 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.125210 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.127697 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.140184 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:47 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:47 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:47 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.140240 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.407724 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z6ftx\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.434852 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgkkn"] Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.455144 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.523123 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.675633 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f78ml"] Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.832108 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmmfc"] Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.910574 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f78ml" event={"ID":"01ab894a-0ddc-46a2-8027-96606aae9396","Type":"ContainerStarted","Data":"039ad1a66e08c2af06ea29cc46d20edfe817d2205104e959c1c1d02455a4ce3e"} Jan 31 16:32:47 crc kubenswrapper[4730]: W0131 16:32:47.936891 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b701a69_5acf_4822_a395_e35001c38825.slice/crio-2b0c6bd2d86f4b9d584ea79f218d4f62973a5d067ac732dda1a0e91aa5407d4e WatchSource:0}: Error finding container 2b0c6bd2d86f4b9d584ea79f218d4f62973a5d067ac732dda1a0e91aa5407d4e: Status 404 returned error can't find the container with id 2b0c6bd2d86f4b9d584ea79f218d4f62973a5d067ac732dda1a0e91aa5407d4e Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.942046 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgkkn" event={"ID":"61274bcb-156d-4bfd-806e-89500983ef42","Type":"ContainerStarted","Data":"72605fab90cebc68424d4ab0945d7a4bff5cb2ece43d98361867af95fcfc79e6"} Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.962160 4730 generic.go:334] "Generic (PLEG): container finished" podID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerID="7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c" exitCode=0 Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.962488 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c9rs" event={"ID":"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5","Type":"ContainerDied","Data":"7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c"} Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.962545 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c9rs" event={"ID":"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5","Type":"ContainerStarted","Data":"5d66e1f48b35199b1abb4265e3e43f1c2709d8d39c3821d9cdab75e6118d1737"} Jan 31 16:32:47 crc kubenswrapper[4730]: I0131 16:32:47.986688 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7b536334-651e-4060-b6ea-1dd32c86b72a","Type":"ContainerStarted","Data":"2c572a4fa9f4c7b63a2c3afdb62c9b725e326f7f97e5f301448b95fbc421a813"} Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.051436 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z6ftx"] Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.081727 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:48 crc 
kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:48 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:48 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.082087 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.478731 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.517702 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.570414 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b61a61bd-3aaa-42b6-9681-2945b18462c2-config-volume\") pod \"b61a61bd-3aaa-42b6-9681-2945b18462c2\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.570510 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b61a61bd-3aaa-42b6-9681-2945b18462c2-secret-volume\") pod \"b61a61bd-3aaa-42b6-9681-2945b18462c2\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.570708 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frn4n\" (UniqueName: \"kubernetes.io/projected/b61a61bd-3aaa-42b6-9681-2945b18462c2-kube-api-access-frn4n\") pod \"b61a61bd-3aaa-42b6-9681-2945b18462c2\" (UID: \"b61a61bd-3aaa-42b6-9681-2945b18462c2\") " Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.571589 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61a61bd-3aaa-42b6-9681-2945b18462c2-config-volume" (OuterVolumeSpecName: "config-volume") pod "b61a61bd-3aaa-42b6-9681-2945b18462c2" (UID: "b61a61bd-3aaa-42b6-9681-2945b18462c2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.578664 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61a61bd-3aaa-42b6-9681-2945b18462c2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b61a61bd-3aaa-42b6-9681-2945b18462c2" (UID: "b61a61bd-3aaa-42b6-9681-2945b18462c2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.580150 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61a61bd-3aaa-42b6-9681-2945b18462c2-kube-api-access-frn4n" (OuterVolumeSpecName: "kube-api-access-frn4n") pod "b61a61bd-3aaa-42b6-9681-2945b18462c2" (UID: "b61a61bd-3aaa-42b6-9681-2945b18462c2"). InnerVolumeSpecName "kube-api-access-frn4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.675769 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frn4n\" (UniqueName: \"kubernetes.io/projected/b61a61bd-3aaa-42b6-9681-2945b18462c2-kube-api-access-frn4n\") on node \"crc\" DevicePath \"\"" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.675907 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b61a61bd-3aaa-42b6-9681-2945b18462c2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 16:32:48 crc kubenswrapper[4730]: I0131 16:32:48.675926 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b61a61bd-3aaa-42b6-9681-2945b18462c2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.016207 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" event={"ID":"0d504518-949c-45ca-8fc7-2f7e1d00f611","Type":"ContainerStarted","Data":"ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8"} Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.016264 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" event={"ID":"0d504518-949c-45ca-8fc7-2f7e1d00f611","Type":"ContainerStarted","Data":"e264a3e695444659a52ff79fe750e481e82e20561fecac66a4a38e4fda504e80"} Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.016309 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.020194 4730 generic.go:334] "Generic (PLEG): container finished" podID="0b701a69-5acf-4822-a395-e35001c38825" containerID="134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30" exitCode=0 Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.020240 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmmfc" event={"ID":"0b701a69-5acf-4822-a395-e35001c38825","Type":"ContainerDied","Data":"134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30"} Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.020258 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmmfc" event={"ID":"0b701a69-5acf-4822-a395-e35001c38825","Type":"ContainerStarted","Data":"2b0c6bd2d86f4b9d584ea79f218d4f62973a5d067ac732dda1a0e91aa5407d4e"} Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.035146 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7b536334-651e-4060-b6ea-1dd32c86b72a","Type":"ContainerStarted","Data":"318a07aaaee613dcca18de9c5b29bc710f276dd5ca100bb71ef9d526739e264b"} Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.044707 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" podStartSLOduration=134.044685831 podStartE2EDuration="2m14.044685831s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:49.040599619 +0000 UTC m=+155.846656535" watchObservedRunningTime="2026-01-31 16:32:49.044685831 +0000 UTC m=+155.850742757" Jan 31 
16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.045820 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" event={"ID":"b61a61bd-3aaa-42b6-9681-2945b18462c2","Type":"ContainerDied","Data":"5572516bea8b91813f7e0ae490bcc32c4fe309631d8ca91fba4b806d1c108fb3"} Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.045850 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5572516bea8b91813f7e0ae490bcc32c4fe309631d8ca91fba4b806d1c108fb3" Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.045896 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl" Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.058476 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.058460452 podStartE2EDuration="3.058460452s" podCreationTimestamp="2026-01-31 16:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:49.053877035 +0000 UTC m=+155.859933951" watchObservedRunningTime="2026-01-31 16:32:49.058460452 +0000 UTC m=+155.864517368" Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.065555 4730 generic.go:334] "Generic (PLEG): container finished" podID="01ab894a-0ddc-46a2-8027-96606aae9396" containerID="19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596" exitCode=0 Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.065645 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f78ml" event={"ID":"01ab894a-0ddc-46a2-8027-96606aae9396","Type":"ContainerDied","Data":"19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596"} Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.078940 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:49 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:49 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:49 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.078985 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.095698 4730 generic.go:334] "Generic (PLEG): container finished" podID="61274bcb-156d-4bfd-806e-89500983ef42" containerID="a1fffc603cc97c82ce4221d42fcb5f6604c3caf80e515868d243d20e02b88d14" exitCode=0 Jan 31 16:32:49 crc kubenswrapper[4730]: I0131 16:32:49.096334 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgkkn" event={"ID":"61274bcb-156d-4bfd-806e-89500983ef42","Type":"ContainerDied","Data":"a1fffc603cc97c82ce4221d42fcb5f6604c3caf80e515868d243d20e02b88d14"} Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.078476 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:50 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:50 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:50 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.078791 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.136586 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 16:32:50 crc kubenswrapper[4730]: E0131 16:32:50.136794 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61a61bd-3aaa-42b6-9681-2945b18462c2" containerName="collect-profiles" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.136821 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61a61bd-3aaa-42b6-9681-2945b18462c2" containerName="collect-profiles" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.136930 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b61a61bd-3aaa-42b6-9681-2945b18462c2" containerName="collect-profiles" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.137275 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.141428 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.143038 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.151143 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.156649 4730 generic.go:334] "Generic (PLEG): container finished" podID="7b536334-651e-4060-b6ea-1dd32c86b72a" containerID="318a07aaaee613dcca18de9c5b29bc710f276dd5ca100bb71ef9d526739e264b" exitCode=0 Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.157383 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7b536334-651e-4060-b6ea-1dd32c86b72a","Type":"ContainerDied","Data":"318a07aaaee613dcca18de9c5b29bc710f276dd5ca100bb71ef9d526739e264b"} Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.252275 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ba07778-ff6c-49f7-931e-58afbf8b7136-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.252391 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ba07778-ff6c-49f7-931e-58afbf8b7136-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc 
kubenswrapper[4730]: I0131 16:32:50.353008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ba07778-ff6c-49f7-931e-58afbf8b7136-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.353083 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ba07778-ff6c-49f7-931e-58afbf8b7136-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.353116 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ba07778-ff6c-49f7-931e-58afbf8b7136-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.376072 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ba07778-ff6c-49f7-931e-58afbf8b7136-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.481289 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.624664 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:50 crc kubenswrapper[4730]: I0131 16:32:50.633162 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4cfvt" Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.078140 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:51 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:51 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:51 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.078374 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.667445 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.791019 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.830733 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5vcn4" Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.888012 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b536334-651e-4060-b6ea-1dd32c86b72a-kubelet-dir\") pod \"7b536334-651e-4060-b6ea-1dd32c86b72a\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.888062 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b536334-651e-4060-b6ea-1dd32c86b72a-kube-api-access\") pod \"7b536334-651e-4060-b6ea-1dd32c86b72a\" (UID: \"7b536334-651e-4060-b6ea-1dd32c86b72a\") " Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.888981 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b536334-651e-4060-b6ea-1dd32c86b72a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7b536334-651e-4060-b6ea-1dd32c86b72a" (UID: "7b536334-651e-4060-b6ea-1dd32c86b72a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.897596 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b536334-651e-4060-b6ea-1dd32c86b72a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7b536334-651e-4060-b6ea-1dd32c86b72a" (UID: "7b536334-651e-4060-b6ea-1dd32c86b72a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.988991 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b536334-651e-4060-b6ea-1dd32c86b72a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:32:51 crc kubenswrapper[4730]: I0131 16:32:51.989025 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b536334-651e-4060-b6ea-1dd32c86b72a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:32:52 crc kubenswrapper[4730]: I0131 16:32:52.080562 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:52 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:52 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:52 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:52 crc kubenswrapper[4730]: I0131 16:32:52.080644 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:52 crc kubenswrapper[4730]: I0131 16:32:52.193918 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ba07778-ff6c-49f7-931e-58afbf8b7136","Type":"ContainerStarted","Data":"e6b9f304941f215893e51f290f4b471a3ffe90ac02ab3c63d27f7f4f86f302a2"} Jan 31 16:32:52 crc kubenswrapper[4730]: I0131 16:32:52.208114 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7b536334-651e-4060-b6ea-1dd32c86b72a","Type":"ContainerDied","Data":"2c572a4fa9f4c7b63a2c3afdb62c9b725e326f7f97e5f301448b95fbc421a813"} Jan 31 16:32:52 crc kubenswrapper[4730]: I0131 16:32:52.208151 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c572a4fa9f4c7b63a2c3afdb62c9b725e326f7f97e5f301448b95fbc421a813" Jan 31 16:32:52 crc kubenswrapper[4730]: I0131 16:32:52.208231 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 16:32:53 crc kubenswrapper[4730]: I0131 16:32:53.080337 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:53 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:53 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:53 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:53 crc kubenswrapper[4730]: I0131 16:32:53.080996 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:53 crc kubenswrapper[4730]: I0131 16:32:53.230213 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ba07778-ff6c-49f7-931e-58afbf8b7136","Type":"ContainerStarted","Data":"7294a5e7953e1d4eddcbc41e44a8f5ac43460e281f5f0998072cba75343cefe4"} Jan 31 16:32:53 crc kubenswrapper[4730]: I0131 16:32:53.251641 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.251610838 podStartE2EDuration="3.251610838s" podCreationTimestamp="2026-01-31 16:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:32:53.245200757 +0000 UTC m=+160.051257683" watchObservedRunningTime="2026-01-31 16:32:53.251610838 +0000 UTC m=+160.057667754" Jan 31 16:32:54 crc kubenswrapper[4730]: I0131 16:32:54.078833 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:54 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:54 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:54 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:54 crc kubenswrapper[4730]: I0131 16:32:54.078901 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:54 crc kubenswrapper[4730]: I0131 16:32:54.269116 4730 generic.go:334] "Generic (PLEG): container finished" podID="3ba07778-ff6c-49f7-931e-58afbf8b7136" containerID="7294a5e7953e1d4eddcbc41e44a8f5ac43460e281f5f0998072cba75343cefe4" exitCode=0 Jan 31 16:32:54 crc kubenswrapper[4730]: I0131 16:32:54.269238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ba07778-ff6c-49f7-931e-58afbf8b7136","Type":"ContainerDied","Data":"7294a5e7953e1d4eddcbc41e44a8f5ac43460e281f5f0998072cba75343cefe4"} Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.077564 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:55 crc 
kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:55 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:55 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.077652 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.715687 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.874884 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ba07778-ff6c-49f7-931e-58afbf8b7136-kubelet-dir\") pod \"3ba07778-ff6c-49f7-931e-58afbf8b7136\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.874967 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ba07778-ff6c-49f7-931e-58afbf8b7136-kube-api-access\") pod \"3ba07778-ff6c-49f7-931e-58afbf8b7136\" (UID: \"3ba07778-ff6c-49f7-931e-58afbf8b7136\") " Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.875186 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ba07778-ff6c-49f7-931e-58afbf8b7136-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ba07778-ff6c-49f7-931e-58afbf8b7136" (UID: "3ba07778-ff6c-49f7-931e-58afbf8b7136"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.880594 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba07778-ff6c-49f7-931e-58afbf8b7136-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ba07778-ff6c-49f7-931e-58afbf8b7136" (UID: "3ba07778-ff6c-49f7-931e-58afbf8b7136"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.976034 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ba07778-ff6c-49f7-931e-58afbf8b7136-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:32:55 crc kubenswrapper[4730]: I0131 16:32:55.976072 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ba07778-ff6c-49f7-931e-58afbf8b7136-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.079317 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:56 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:56 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:56 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.079367 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.322721 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ba07778-ff6c-49f7-931e-58afbf8b7136","Type":"ContainerDied","Data":"e6b9f304941f215893e51f290f4b471a3ffe90ac02ab3c63d27f7f4f86f302a2"} Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.322768 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b9f304941f215893e51f290f4b471a3ffe90ac02ab3c63d27f7f4f86f302a2" Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.322858 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.360674 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bcp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.360723 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bcp4" podUID="e8d1e83c-c1a5-4565-b1bc-454b416c6039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.360795 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bcp4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.360860 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2bcp4" podUID="e8d1e83c-c1a5-4565-b1bc-454b416c6039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.642715 4730 patch_prober.go:28] interesting pod/console-f9d7485db-6v2xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.642931 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6v2xk" podUID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.975201 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:32:56 crc kubenswrapper[4730]: I0131 16:32:56.975270 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:32:57 crc kubenswrapper[4730]: I0131 16:32:57.077699 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:57 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:57 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:57 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:57 crc kubenswrapper[4730]: I0131 16:32:57.077754 4730 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:57 crc kubenswrapper[4730]: I0131 16:32:57.909432 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:57 crc kubenswrapper[4730]: I0131 16:32:57.921115 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39ef74a4-f27d-498b-8bbd-aae64590d030-metrics-certs\") pod \"network-metrics-daemon-sg8lw\" (UID: \"39ef74a4-f27d-498b-8bbd-aae64590d030\") " pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:58 crc kubenswrapper[4730]: I0131 16:32:58.078852 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:58 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:58 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:58 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:58 crc kubenswrapper[4730]: I0131 16:32:58.078920 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:32:58 crc kubenswrapper[4730]: I0131 16:32:58.193006 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sg8lw" Jan 31 16:32:59 crc kubenswrapper[4730]: I0131 16:32:59.078498 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:32:59 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:32:59 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:32:59 crc kubenswrapper[4730]: healthz check failed Jan 31 16:32:59 crc kubenswrapper[4730]: I0131 16:32:59.078548 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:33:00 crc kubenswrapper[4730]: I0131 16:33:00.077610 4730 patch_prober.go:28] interesting pod/router-default-5444994796-jwc2k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 16:33:00 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Jan 31 16:33:00 crc kubenswrapper[4730]: [+]process-running ok Jan 31 16:33:00 crc kubenswrapper[4730]: healthz check failed Jan 31 16:33:00 crc kubenswrapper[4730]: I0131 16:33:00.077660 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwc2k" podUID="887bb6af-277c-4837-b71a-6a94d0eb2edf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:33:01 crc kubenswrapper[4730]: I0131 16:33:01.078617 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:33:01 crc kubenswrapper[4730]: I0131 16:33:01.080783 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jwc2k" Jan 31 16:33:06 crc kubenswrapper[4730]: I0131 16:33:06.371905 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2bcp4" Jan 31 16:33:06 crc kubenswrapper[4730]: I0131 16:33:06.647285 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:33:06 crc kubenswrapper[4730]: I0131 16:33:06.652794 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:33:07 crc kubenswrapper[4730]: I0131 16:33:07.466548 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:33:16 crc kubenswrapper[4730]: I0131 16:33:16.737748 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bzzjv" Jan 31 16:33:22 crc kubenswrapper[4730]: I0131 16:33:22.612171 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 16:33:22 crc kubenswrapper[4730]: E0131 16:33:22.852651 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 16:33:22 crc kubenswrapper[4730]: E0131 16:33:22.853120 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5npw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vnkqr_openshift-marketplace(f77d01ac-b8b8-436b-9626-6230af5c95b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 16:33:22 crc kubenswrapper[4730]: E0131 16:33:22.854397 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vnkqr" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.481279 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sg8lw"] Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.497201 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" event={"ID":"39ef74a4-f27d-498b-8bbd-aae64590d030","Type":"ContainerStarted","Data":"46f055621bab690141d01e29207baa25b7b505cda8fad7367e3bc1fe6ad04164"} Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.499992 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmmfc" event={"ID":"0b701a69-5acf-4822-a395-e35001c38825","Type":"ContainerStarted","Data":"ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184"} Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.502877 4730 generic.go:334] "Generic (PLEG): container finished" podID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerID="226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086" exitCode=0 Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.502950 4730 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c9rs" event={"ID":"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5","Type":"ContainerDied","Data":"226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086"} Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.504482 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px9cf" event={"ID":"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e","Type":"ContainerStarted","Data":"2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df"} Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.507570 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f78ml" event={"ID":"01ab894a-0ddc-46a2-8027-96606aae9396","Type":"ContainerStarted","Data":"c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e"} Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.514230 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgkkn" event={"ID":"61274bcb-156d-4bfd-806e-89500983ef42","Type":"ContainerStarted","Data":"355ce229d1360b1e932b41baa35123cbc318d0f42792243c2238deb32a82fc70"} Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.517111 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwsps" event={"ID":"e8d7fc22-9a5c-4569-821d-c915ab1f5657","Type":"ContainerStarted","Data":"9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4"} Jan 31 16:33:23 crc kubenswrapper[4730]: I0131 16:33:23.521504 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jq8n" event={"ID":"24e875c6-16c4-43f2-8533-7d1af60844fb","Type":"ContainerStarted","Data":"8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a"} Jan 31 16:33:23 crc kubenswrapper[4730]: E0131 16:33:23.524032 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vnkqr" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.530951 4730 generic.go:334] "Generic (PLEG): container finished" podID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerID="8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a" exitCode=0 Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.531157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jq8n" event={"ID":"24e875c6-16c4-43f2-8533-7d1af60844fb","Type":"ContainerDied","Data":"8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a"} Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.534435 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" event={"ID":"39ef74a4-f27d-498b-8bbd-aae64590d030","Type":"ContainerStarted","Data":"c92d95c7636f25b50789d8a76ce682bd22d9fb5d354525db744b00d2fe294607"} Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.534477 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sg8lw" event={"ID":"39ef74a4-f27d-498b-8bbd-aae64590d030","Type":"ContainerStarted","Data":"01ff7203f62ec2183dc849b4f521ce3ac3dc1ba0390e386b892b2db9bf79c056"} Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.537302 4730 generic.go:334] 
"Generic (PLEG): container finished" podID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerID="2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df" exitCode=0 Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.537365 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px9cf" event={"ID":"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e","Type":"ContainerDied","Data":"2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df"} Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.542489 4730 generic.go:334] "Generic (PLEG): container finished" podID="61274bcb-156d-4bfd-806e-89500983ef42" containerID="355ce229d1360b1e932b41baa35123cbc318d0f42792243c2238deb32a82fc70" exitCode=0 Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.542555 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgkkn" event={"ID":"61274bcb-156d-4bfd-806e-89500983ef42","Type":"ContainerDied","Data":"355ce229d1360b1e932b41baa35123cbc318d0f42792243c2238deb32a82fc70"} Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.546252 4730 generic.go:334] "Generic (PLEG): container finished" podID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerID="9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4" exitCode=0 Jan 31 16:33:24 crc kubenswrapper[4730]: I0131 16:33:24.546459 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwsps" event={"ID":"e8d7fc22-9a5c-4569-821d-c915ab1f5657","Type":"ContainerDied","Data":"9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4"} Jan 31 16:33:25 crc kubenswrapper[4730]: I0131 16:33:25.554167 4730 generic.go:334] "Generic (PLEG): container finished" podID="0b701a69-5acf-4822-a395-e35001c38825" containerID="ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184" exitCode=0 Jan 31 16:33:25 crc kubenswrapper[4730]: I0131 16:33:25.554207 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmmfc" event={"ID":"0b701a69-5acf-4822-a395-e35001c38825","Type":"ContainerDied","Data":"ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184"} Jan 31 16:33:25 crc kubenswrapper[4730]: I0131 16:33:25.556235 4730 generic.go:334] "Generic (PLEG): container finished" podID="01ab894a-0ddc-46a2-8027-96606aae9396" containerID="c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e" exitCode=0 Jan 31 16:33:25 crc kubenswrapper[4730]: I0131 16:33:25.556300 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f78ml" event={"ID":"01ab894a-0ddc-46a2-8027-96606aae9396","Type":"ContainerDied","Data":"c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e"} Jan 31 16:33:25 crc kubenswrapper[4730]: I0131 16:33:25.653836 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sg8lw" podStartSLOduration=170.653816854 podStartE2EDuration="2m50.653816854s" podCreationTimestamp="2026-01-31 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:33:25.651601958 +0000 UTC m=+192.457658874" watchObservedRunningTime="2026-01-31 16:33:25.653816854 +0000 UTC m=+192.459873770" Jan 31 16:33:25 crc kubenswrapper[4730]: I0131 16:33:25.705648 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-5kjkn"] Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.565296 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c9rs" event={"ID":"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5","Type":"ContainerStarted","Data":"b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2"} Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.583875 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7c9rs" podStartSLOduration=3.864797332 podStartE2EDuration="41.583857943s" podCreationTimestamp="2026-01-31 16:32:45 +0000 UTC" firstStartedPulling="2026-01-31 16:32:47.9695841 +0000 UTC m=+154.775641016" lastFinishedPulling="2026-01-31 16:33:25.688644711 +0000 UTC m=+192.494701627" observedRunningTime="2026-01-31 16:33:26.582203734 +0000 UTC m=+193.388260660" watchObservedRunningTime="2026-01-31 16:33:26.583857943 +0000 UTC m=+193.389914859" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.731848 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 16:33:26 crc kubenswrapper[4730]: E0131 16:33:26.732126 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba07778-ff6c-49f7-931e-58afbf8b7136" containerName="pruner" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.732142 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba07778-ff6c-49f7-931e-58afbf8b7136" containerName="pruner" Jan 31 16:33:26 crc kubenswrapper[4730]: E0131 16:33:26.732173 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b536334-651e-4060-b6ea-1dd32c86b72a" containerName="pruner" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.732182 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b536334-651e-4060-b6ea-1dd32c86b72a" containerName="pruner" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.732296 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b536334-651e-4060-b6ea-1dd32c86b72a" containerName="pruner" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.732313 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba07778-ff6c-49f7-931e-58afbf8b7136" containerName="pruner" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.732737 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.735472 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.735649 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.752078 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.883026 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.883306 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.979106 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.979160 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.984164 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.984232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:26 crc kubenswrapper[4730]: I0131 16:33:26.984299 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:27 crc kubenswrapper[4730]: I0131 16:33:27.001643 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:27 crc kubenswrapper[4730]: I0131 16:33:27.054242 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:27 crc kubenswrapper[4730]: I0131 16:33:27.484283 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 16:33:27 crc kubenswrapper[4730]: W0131 16:33:27.568124 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod891ea3c8_6fa0_4fa5_bfec_295b77622f8e.slice/crio-7a094b67c432ca9a59dcff7192454beccca378190ebec3beccc254743cb8214e WatchSource:0}: Error finding container 7a094b67c432ca9a59dcff7192454beccca378190ebec3beccc254743cb8214e: Status 404 returned error can't find the container with id 7a094b67c432ca9a59dcff7192454beccca378190ebec3beccc254743cb8214e Jan 31 16:33:27 crc kubenswrapper[4730]: I0131 16:33:27.574440 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jq8n" event={"ID":"24e875c6-16c4-43f2-8533-7d1af60844fb","Type":"ContainerStarted","Data":"5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e"} Jan 31 16:33:28 crc kubenswrapper[4730]: I0131 16:33:28.584881 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"891ea3c8-6fa0-4fa5-bfec-295b77622f8e","Type":"ContainerStarted","Data":"7a094b67c432ca9a59dcff7192454beccca378190ebec3beccc254743cb8214e"} Jan 31 16:33:28 crc kubenswrapper[4730]: I0131 16:33:28.605246 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7jq8n" podStartSLOduration=5.491299373 podStartE2EDuration="45.605231916s" podCreationTimestamp="2026-01-31 16:32:43 +0000 UTC" firstStartedPulling="2026-01-31 16:32:46.675980019 +0000 UTC m=+153.482036935" lastFinishedPulling="2026-01-31 16:33:26.789912552 +0000 UTC m=+193.595969478" observedRunningTime="2026-01-31 16:33:28.601752523 +0000 UTC m=+195.407809439" watchObservedRunningTime="2026-01-31 16:33:28.605231916 +0000 UTC m=+195.411288832" Jan 31 16:33:29 crc kubenswrapper[4730]: I0131 16:33:29.591058 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgkkn" event={"ID":"61274bcb-156d-4bfd-806e-89500983ef42","Type":"ContainerStarted","Data":"73eac38dabd46e0a3688c263b6939d47227d9cf6f7b43d6bea47c83c2265aa79"} Jan 31 16:33:29 crc kubenswrapper[4730]: I0131 16:33:29.592316 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"891ea3c8-6fa0-4fa5-bfec-295b77622f8e","Type":"ContainerStarted","Data":"e024be31849ee2248217731751ae385f08490700796156c1c4295ec5c6d6e22e"} Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.598789 4730 generic.go:334] "Generic (PLEG): container finished" podID="891ea3c8-6fa0-4fa5-bfec-295b77622f8e" containerID="e024be31849ee2248217731751ae385f08490700796156c1c4295ec5c6d6e22e" exitCode=0 Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.599103 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"891ea3c8-6fa0-4fa5-bfec-295b77622f8e","Type":"ContainerDied","Data":"e024be31849ee2248217731751ae385f08490700796156c1c4295ec5c6d6e22e"} Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.640147 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mgkkn" podStartSLOduration=5.552111959 podStartE2EDuration="44.640126716s" podCreationTimestamp="2026-01-31 16:32:46 +0000 UTC" firstStartedPulling="2026-01-31 16:32:49.131423115 +0000 UTC m=+155.937480031" lastFinishedPulling="2026-01-31 16:33:28.219437872 +0000 UTC m=+195.025494788" observedRunningTime="2026-01-31 16:33:30.635919631 +0000 UTC m=+197.441976547" watchObservedRunningTime="2026-01-31 16:33:30.640126716 +0000 UTC m=+197.446183642" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.730656 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.731512 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.744375 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.831821 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kube-api-access\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.831885 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.831949 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-var-lock\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.933461 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.933541 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-var-lock\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.933561 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kube-api-access\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.933828 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kubelet-dir\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.933858 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-var-lock\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:30 crc kubenswrapper[4730]: I0131 16:33:30.952172 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kube-api-access\") pod \"installer-9-crc\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:31 crc kubenswrapper[4730]: I0131 16:33:31.048966 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:33:31 crc kubenswrapper[4730]: I0131 16:33:31.607654 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f78ml" event={"ID":"01ab894a-0ddc-46a2-8027-96606aae9396","Type":"ContainerStarted","Data":"153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40"} Jan 31 16:33:31 crc kubenswrapper[4730]: I0131 16:33:31.624339 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f78ml" podStartSLOduration=4.463210758 podStartE2EDuration="45.62432386s" podCreationTimestamp="2026-01-31 16:32:46 +0000 UTC" firstStartedPulling="2026-01-31 16:32:49.072556242 +0000 UTC m=+155.878613158" lastFinishedPulling="2026-01-31 16:33:30.233669344 +0000 UTC m=+197.039726260" observedRunningTime="2026-01-31 16:33:31.621714023 +0000 UTC m=+198.427770939" watchObservedRunningTime="2026-01-31 16:33:31.62432386 +0000 UTC m=+198.430380766" Jan 31 16:33:31 crc kubenswrapper[4730]: I0131 16:33:31.939444 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.047226 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kube-api-access\") pod \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.047285 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kubelet-dir\") pod \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\" (UID: \"891ea3c8-6fa0-4fa5-bfec-295b77622f8e\") " Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.047676 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "891ea3c8-6fa0-4fa5-bfec-295b77622f8e" (UID: "891ea3c8-6fa0-4fa5-bfec-295b77622f8e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.055973 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "891ea3c8-6fa0-4fa5-bfec-295b77622f8e" (UID: "891ea3c8-6fa0-4fa5-bfec-295b77622f8e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.149066 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.149098 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/891ea3c8-6fa0-4fa5-bfec-295b77622f8e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.617076 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"891ea3c8-6fa0-4fa5-bfec-295b77622f8e","Type":"ContainerDied","Data":"7a094b67c432ca9a59dcff7192454beccca378190ebec3beccc254743cb8214e"} Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.617118 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a094b67c432ca9a59dcff7192454beccca378190ebec3beccc254743cb8214e" Jan 31 16:33:32 crc kubenswrapper[4730]: I0131 16:33:32.617139 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.002734 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.002777 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.219734 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.450730 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.628364 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmmfc" event={"ID":"0b701a69-5acf-4822-a395-e35001c38825","Type":"ContainerStarted","Data":"9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02"} Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.630590 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px9cf" event={"ID":"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e","Type":"ContainerStarted","Data":"065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e"} Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.631863 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af34c31a-e26b-45f6-abbc-a1b8eafaf409","Type":"ContainerStarted","Data":"3424410473f881d1921328b5322b286a018be67a646fc2e9a281faa70d2ef3f3"} Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.633733 4730 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwsps" event={"ID":"e8d7fc22-9a5c-4569-821d-c915ab1f5657","Type":"ContainerStarted","Data":"13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2"} Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.651790 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qmmfc" podStartSLOduration=3.648183659 podStartE2EDuration="48.651777439s" podCreationTimestamp="2026-01-31 16:32:46 +0000 UTC" firstStartedPulling="2026-01-31 16:32:49.022281244 +0000 UTC m=+155.828338160" lastFinishedPulling="2026-01-31 16:33:34.025875024 +0000 UTC m=+200.831931940" observedRunningTime="2026-01-31 16:33:34.648995796 +0000 UTC m=+201.455052712" watchObservedRunningTime="2026-01-31 16:33:34.651777439 +0000 UTC m=+201.457834355" Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.679965 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.698570 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-px9cf" podStartSLOduration=4.460137525 podStartE2EDuration="51.698550889s" podCreationTimestamp="2026-01-31 16:32:43 +0000 UTC" firstStartedPulling="2026-01-31 16:32:46.775212716 +0000 UTC m=+153.581269632" lastFinishedPulling="2026-01-31 16:33:34.01362608 +0000 UTC m=+200.819682996" observedRunningTime="2026-01-31 16:33:34.678684969 +0000 UTC m=+201.484741885" watchObservedRunningTime="2026-01-31 16:33:34.698550889 +0000 UTC m=+201.504607805" Jan 31 16:33:34 crc kubenswrapper[4730]: I0131 16:33:34.699848 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xwsps" podStartSLOduration=5.694135237 podStartE2EDuration="51.699844298s" podCreationTimestamp="2026-01-31 16:32:43 +0000 UTC" firstStartedPulling="2026-01-31 16:32:46.66459286 +0000 UTC m=+153.470649776" lastFinishedPulling="2026-01-31 16:33:32.670301911 +0000 UTC m=+199.476358837" observedRunningTime="2026-01-31 16:33:34.698274091 +0000 UTC m=+201.504331007" watchObservedRunningTime="2026-01-31 16:33:34.699844298 +0000 UTC m=+201.505901214" Jan 31 16:33:35 crc kubenswrapper[4730]: I0131 16:33:35.640483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af34c31a-e26b-45f6-abbc-a1b8eafaf409","Type":"ContainerStarted","Data":"ecd93892bb3dd1bbdfc7a7d219244cfb7d3a21cd43e33f6cb4429d4a2e74e444"} Jan 31 16:33:35 crc kubenswrapper[4730]: I0131 16:33:35.656722 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.65671012 podStartE2EDuration="5.65671012s" podCreationTimestamp="2026-01-31 16:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:33:35.653908517 +0000 UTC m=+202.459965423" watchObservedRunningTime="2026-01-31 16:33:35.65671012 +0000 UTC m=+202.462767036" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.202331 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.202376 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.248859 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.565593 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.565857 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.600608 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.683317 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.684118 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.915475 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.915519 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:33:36 crc kubenswrapper[4730]: I0131 16:33:36.967988 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:33:37 crc kubenswrapper[4730]: I0131 16:33:37.130084 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:33:37 crc kubenswrapper[4730]: I0131 16:33:37.130217 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:33:37 crc kubenswrapper[4730]: I0131 16:33:37.680419 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:33:38 crc kubenswrapper[4730]: I0131 16:33:38.170409 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qmmfc" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="registry-server" probeResult="failure" output=< Jan 31 16:33:38 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:33:38 crc kubenswrapper[4730]: > Jan 31 16:33:38 crc kubenswrapper[4730]: I0131 16:33:38.953001 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgkkn"] Jan 31 16:33:38 crc kubenswrapper[4730]: I0131 16:33:38.953246 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mgkkn" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="registry-server" containerID="cri-o://73eac38dabd46e0a3688c263b6939d47227d9cf6f7b43d6bea47c83c2265aa79" gracePeriod=2 Jan 31 16:33:39 crc kubenswrapper[4730]: I0131 16:33:39.679770 4730 generic.go:334] "Generic (PLEG): container finished" podID="61274bcb-156d-4bfd-806e-89500983ef42" containerID="73eac38dabd46e0a3688c263b6939d47227d9cf6f7b43d6bea47c83c2265aa79" exitCode=0 Jan 31 16:33:39 crc 
kubenswrapper[4730]: I0131 16:33:39.679857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgkkn" event={"ID":"61274bcb-156d-4bfd-806e-89500983ef42","Type":"ContainerDied","Data":"73eac38dabd46e0a3688c263b6939d47227d9cf6f7b43d6bea47c83c2265aa79"} Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.031466 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.162691 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-utilities\") pod \"61274bcb-156d-4bfd-806e-89500983ef42\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.162757 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-catalog-content\") pod \"61274bcb-156d-4bfd-806e-89500983ef42\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.162802 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zwjs\" (UniqueName: \"kubernetes.io/projected/61274bcb-156d-4bfd-806e-89500983ef42-kube-api-access-2zwjs\") pod \"61274bcb-156d-4bfd-806e-89500983ef42\" (UID: \"61274bcb-156d-4bfd-806e-89500983ef42\") " Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.163505 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-utilities" (OuterVolumeSpecName: "utilities") pod "61274bcb-156d-4bfd-806e-89500983ef42" (UID: "61274bcb-156d-4bfd-806e-89500983ef42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.177135 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61274bcb-156d-4bfd-806e-89500983ef42-kube-api-access-2zwjs" (OuterVolumeSpecName: "kube-api-access-2zwjs") pod "61274bcb-156d-4bfd-806e-89500983ef42" (UID: "61274bcb-156d-4bfd-806e-89500983ef42"). InnerVolumeSpecName "kube-api-access-2zwjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.185457 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61274bcb-156d-4bfd-806e-89500983ef42" (UID: "61274bcb-156d-4bfd-806e-89500983ef42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.264571 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.264606 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61274bcb-156d-4bfd-806e-89500983ef42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.264618 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zwjs\" (UniqueName: \"kubernetes.io/projected/61274bcb-156d-4bfd-806e-89500983ef42-kube-api-access-2zwjs\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.693985 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgkkn" event={"ID":"61274bcb-156d-4bfd-806e-89500983ef42","Type":"ContainerDied","Data":"72605fab90cebc68424d4ab0945d7a4bff5cb2ece43d98361867af95fcfc79e6"} Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.694034 4730 scope.go:117] "RemoveContainer" containerID="73eac38dabd46e0a3688c263b6939d47227d9cf6f7b43d6bea47c83c2265aa79" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.694167 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgkkn" Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.715316 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgkkn"] Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.718045 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgkkn"] Jan 31 16:33:40 crc kubenswrapper[4730]: I0131 16:33:40.947892 4730 scope.go:117] "RemoveContainer" containerID="355ce229d1360b1e932b41baa35123cbc318d0f42792243c2238deb32a82fc70" Jan 31 16:33:41 crc kubenswrapper[4730]: I0131 16:33:41.284179 4730 scope.go:117] "RemoveContainer" containerID="a1fffc603cc97c82ce4221d42fcb5f6604c3caf80e515868d243d20e02b88d14" Jan 31 16:33:42 crc kubenswrapper[4730]: I0131 16:33:42.477056 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61274bcb-156d-4bfd-806e-89500983ef42" path="/var/lib/kubelet/pods/61274bcb-156d-4bfd-806e-89500983ef42/volumes" Jan 31 16:33:42 crc kubenswrapper[4730]: I0131 16:33:42.706365 4730 generic.go:334] "Generic (PLEG): container finished" podID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerID="7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4" exitCode=0 Jan 31 16:33:42 crc kubenswrapper[4730]: I0131 16:33:42.706405 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnkqr" event={"ID":"f77d01ac-b8b8-436b-9626-6230af5c95b7","Type":"ContainerDied","Data":"7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4"} Jan 31 16:33:43 crc kubenswrapper[4730]: I0131 16:33:43.720935 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnkqr" event={"ID":"f77d01ac-b8b8-436b-9626-6230af5c95b7","Type":"ContainerStarted","Data":"28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc"} Jan 31 16:33:43 crc kubenswrapper[4730]: I0131 16:33:43.742613 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-vnkqr" podStartSLOduration=4.302969769 podStartE2EDuration="1m0.742594036s" podCreationTimestamp="2026-01-31 16:32:43 +0000 UTC" firstStartedPulling="2026-01-31 16:32:46.719516896 +0000 UTC m=+153.525573812" lastFinishedPulling="2026-01-31 16:33:43.159141163 +0000 UTC m=+209.965198079" observedRunningTime="2026-01-31 16:33:43.74003066 +0000 UTC m=+210.546087586" watchObservedRunningTime="2026-01-31 16:33:43.742594036 +0000 UTC m=+210.548650952" Jan 31 16:33:43 crc kubenswrapper[4730]: I0131 16:33:43.971586 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:33:43 crc kubenswrapper[4730]: I0131 16:33:43.971628 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.009070 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.159523 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.159565 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.470939 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.471477 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.515717 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.768850 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:33:44 crc kubenswrapper[4730]: I0131 16:33:44.770378 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:33:45 crc kubenswrapper[4730]: I0131 16:33:45.193886 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vnkqr" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="registry-server" probeResult="failure" output=< Jan 31 16:33:45 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:33:45 crc kubenswrapper[4730]: > Jan 31 16:33:46 crc kubenswrapper[4730]: I0131 16:33:46.354615 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-px9cf"] Jan 31 16:33:47 crc kubenswrapper[4730]: I0131 16:33:47.198055 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:33:47 crc kubenswrapper[4730]: I0131 16:33:47.253076 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:33:47 crc kubenswrapper[4730]: I0131 16:33:47.744342 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-px9cf" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="registry-server" containerID="cri-o://065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e" gracePeriod=2 Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.716263 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.755915 4730 generic.go:334] "Generic (PLEG): container finished" podID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerID="065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e" exitCode=0 Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.755956 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px9cf" event={"ID":"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e","Type":"ContainerDied","Data":"065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e"} Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.755984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-px9cf" event={"ID":"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e","Type":"ContainerDied","Data":"679ea35fb3e7021eb136e79169a000925a0d66b50d75d8213ebb6baf515a9149"} Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.756000 4730 scope.go:117] "RemoveContainer" containerID="065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.758412 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-px9cf" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.761901 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-utilities\") pod \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.762056 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-catalog-content\") pod \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.762136 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj49s\" (UniqueName: \"kubernetes.io/projected/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-kube-api-access-nj49s\") pod \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\" (UID: \"d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e\") " Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.765009 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-utilities" (OuterVolumeSpecName: "utilities") pod "d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" (UID: "d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.772140 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-kube-api-access-nj49s" (OuterVolumeSpecName: "kube-api-access-nj49s") pod "d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" (UID: "d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e"). InnerVolumeSpecName "kube-api-access-nj49s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.776466 4730 scope.go:117] "RemoveContainer" containerID="2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.796112 4730 scope.go:117] "RemoveContainer" containerID="a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.821311 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" (UID: "d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.821451 4730 scope.go:117] "RemoveContainer" containerID="065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e" Jan 31 16:33:48 crc kubenswrapper[4730]: E0131 16:33:48.822020 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e\": container with ID starting with 065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e not found: ID does not exist" containerID="065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.822053 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e"} err="failed to get container status \"065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e\": rpc error: code = NotFound desc = could not find container \"065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e\": container with ID starting with 065ebf77ca316cde12712bb6772e3fe064328bce0b464c315ec9447b07c4fa8e not found: ID does not exist" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.822103 4730 scope.go:117] "RemoveContainer" containerID="2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df" Jan 31 16:33:48 crc kubenswrapper[4730]: E0131 16:33:48.823084 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df\": container with ID starting with 2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df not found: ID does not exist" containerID="2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.823139 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df"} err="failed to get container status \"2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df\": rpc error: 
code = NotFound desc = could not find container \"2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df\": container with ID starting with 2dc2ca946e60aa19a45af9a7066555c68592eaf7b9ea1d7eca63f9c84a1023df not found: ID does not exist" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.823156 4730 scope.go:117] "RemoveContainer" containerID="a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6" Jan 31 16:33:48 crc kubenswrapper[4730]: E0131 16:33:48.823477 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6\": container with ID starting with a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6 not found: ID does not exist" containerID="a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.823499 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6"} err="failed to get container status \"a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6\": rpc error: code = NotFound desc = could not find container \"a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6\": container with ID starting with a80c35e32b6fc928276f7e4c629ad94f6f76169d07782f63fac86259340f55f6 not found: ID does not exist" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.864240 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.864268 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj49s\" (UniqueName: \"kubernetes.io/projected/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-kube-api-access-nj49s\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:48 crc kubenswrapper[4730]: I0131 16:33:48.864278 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:49 crc kubenswrapper[4730]: I0131 16:33:49.102393 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-px9cf"] Jan 31 16:33:49 crc kubenswrapper[4730]: I0131 16:33:49.106596 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-px9cf"] Jan 31 16:33:50 crc kubenswrapper[4730]: I0131 16:33:50.473866 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" path="/var/lib/kubelet/pods/d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e/volumes" Jan 31 16:33:50 crc kubenswrapper[4730]: I0131 16:33:50.744917 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" podUID="f3e4348b-10b3-482a-a64d-4c2bfe52fb69" containerName="oauth-openshift" containerID="cri-o://058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed" gracePeriod=15 Jan 31 16:33:50 crc kubenswrapper[4730]: I0131 16:33:50.763037 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmmfc"] Jan 31 16:33:50 crc kubenswrapper[4730]: I0131 16:33:50.763463 4730 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-qmmfc" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="registry-server" containerID="cri-o://9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02" gracePeriod=2 Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.200955 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.208118 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.294190 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-idp-0-file-data\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.294507 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-cliconfig\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.294539 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-trusted-ca-bundle\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.294561 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-serving-cert\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.294596 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-router-certs\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.294619 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-ocp-branding-template\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.294680 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-provider-selection\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295388 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-login\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295421 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24t4s\" (UniqueName: \"kubernetes.io/projected/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-kube-api-access-24t4s\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295441 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7q78\" (UniqueName: \"kubernetes.io/projected/0b701a69-5acf-4822-a395-e35001c38825-kube-api-access-r7q78\") pod \"0b701a69-5acf-4822-a395-e35001c38825\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295474 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-catalog-content\") pod \"0b701a69-5acf-4822-a395-e35001c38825\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295500 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-error\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295522 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-utilities\") pod \"0b701a69-5acf-4822-a395-e35001c38825\" (UID: \"0b701a69-5acf-4822-a395-e35001c38825\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295544 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-dir\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295564 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-policies\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295616 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-session\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-service-ca\") pod \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\" (UID: \"f3e4348b-10b3-482a-a64d-4c2bfe52fb69\") " Jan 31 16:33:51 crc 
kubenswrapper[4730]: I0131 16:33:51.295529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.295657 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.296359 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.296922 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.297211 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-utilities" (OuterVolumeSpecName: "utilities") pod "0b701a69-5acf-4822-a395-e35001c38825" (UID: "0b701a69-5acf-4822-a395-e35001c38825"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.297334 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.298556 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.298820 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.298956 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.300034 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-kube-api-access-24t4s" (OuterVolumeSpecName: "kube-api-access-24t4s") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "kube-api-access-24t4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.300931 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b701a69-5acf-4822-a395-e35001c38825-kube-api-access-r7q78" (OuterVolumeSpecName: "kube-api-access-r7q78") pod "0b701a69-5acf-4822-a395-e35001c38825" (UID: "0b701a69-5acf-4822-a395-e35001c38825"). InnerVolumeSpecName "kube-api-access-r7q78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.300966 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.301862 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.302287 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.304520 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.306975 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f3e4348b-10b3-482a-a64d-4c2bfe52fb69" (UID: "f3e4348b-10b3-482a-a64d-4c2bfe52fb69"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397508 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397550 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397565 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397580 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397591 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397606 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397618 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397631 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397642 4730 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24t4s\" (UniqueName: \"kubernetes.io/projected/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-kube-api-access-24t4s\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397654 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7q78\" (UniqueName: \"kubernetes.io/projected/0b701a69-5acf-4822-a395-e35001c38825-kube-api-access-r7q78\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397664 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397675 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397686 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397696 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397710 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.397721 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e4348b-10b3-482a-a64d-4c2bfe52fb69-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.437795 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b701a69-5acf-4822-a395-e35001c38825" (UID: "0b701a69-5acf-4822-a395-e35001c38825"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.499026 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b701a69-5acf-4822-a395-e35001c38825-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.778767 4730 generic.go:334] "Generic (PLEG): container finished" podID="f3e4348b-10b3-482a-a64d-4c2bfe52fb69" containerID="058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed" exitCode=0 Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.778842 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.778855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" event={"ID":"f3e4348b-10b3-482a-a64d-4c2bfe52fb69","Type":"ContainerDied","Data":"058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed"} Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.779147 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kjkn" event={"ID":"f3e4348b-10b3-482a-a64d-4c2bfe52fb69","Type":"ContainerDied","Data":"028623e425929302f815c3bfca034607c0890b76a221aea0f3052f131b64fc37"} Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.779215 4730 scope.go:117] "RemoveContainer" containerID="058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.782513 4730 generic.go:334] "Generic (PLEG): container finished" podID="0b701a69-5acf-4822-a395-e35001c38825" containerID="9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02" exitCode=0 Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.782542 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmmfc" event={"ID":"0b701a69-5acf-4822-a395-e35001c38825","Type":"ContainerDied","Data":"9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02"} Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.782562 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmmfc" event={"ID":"0b701a69-5acf-4822-a395-e35001c38825","Type":"ContainerDied","Data":"2b0c6bd2d86f4b9d584ea79f218d4f62973a5d067ac732dda1a0e91aa5407d4e"} Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.782641 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qmmfc" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.802697 4730 scope.go:117] "RemoveContainer" containerID="058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed" Jan 31 16:33:51 crc kubenswrapper[4730]: E0131 16:33:51.803709 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed\": container with ID starting with 058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed not found: ID does not exist" containerID="058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.803762 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed"} err="failed to get container status \"058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed\": rpc error: code = NotFound desc = could not find container \"058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed\": container with ID starting with 058560941bb3a4738c3a9fd3545d91222cf1f162b3cc11d63f1f9758230534ed not found: ID does not exist" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.803800 4730 scope.go:117] "RemoveContainer" containerID="9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.818618 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmmfc"] Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.823714 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qmmfc"] Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.828287 4730 scope.go:117] "RemoveContainer" containerID="ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.841010 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kjkn"] Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.852644 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kjkn"] Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.856409 4730 scope.go:117] "RemoveContainer" containerID="134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.878017 4730 scope.go:117] "RemoveContainer" containerID="9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02" Jan 31 16:33:51 crc kubenswrapper[4730]: E0131 16:33:51.878514 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02\": container with ID starting with 9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02 not found: ID does not exist" containerID="9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.878555 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02"} err="failed to get container status \"9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02\": rpc error: code = 
NotFound desc = could not find container \"9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02\": container with ID starting with 9f3bf5055a31081a7ead032868fafd3dfdd7db321c01fc083c9ab96b5ed57c02 not found: ID does not exist" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.878585 4730 scope.go:117] "RemoveContainer" containerID="ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184" Jan 31 16:33:51 crc kubenswrapper[4730]: E0131 16:33:51.879108 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184\": container with ID starting with ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184 not found: ID does not exist" containerID="ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.879156 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184"} err="failed to get container status \"ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184\": rpc error: code = NotFound desc = could not find container \"ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184\": container with ID starting with ae1a268236c4ede016154d5bd3d4e4f73f549aeb4cd6445181f76123e2d2b184 not found: ID does not exist" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.879190 4730 scope.go:117] "RemoveContainer" containerID="134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30" Jan 31 16:33:51 crc kubenswrapper[4730]: E0131 16:33:51.879584 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30\": container with ID starting with 134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30 not found: ID does not exist" containerID="134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30" Jan 31 16:33:51 crc kubenswrapper[4730]: I0131 16:33:51.879768 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30"} err="failed to get container status \"134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30\": rpc error: code = NotFound desc = could not find container \"134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30\": container with ID starting with 134291934dc6ea76648169a8c39216338c82816189c11eb7687f4d40e2d9df30 not found: ID does not exist" Jan 31 16:33:52 crc kubenswrapper[4730]: I0131 16:33:52.473322 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b701a69-5acf-4822-a395-e35001c38825" path="/var/lib/kubelet/pods/0b701a69-5acf-4822-a395-e35001c38825/volumes" Jan 31 16:33:52 crc kubenswrapper[4730]: I0131 16:33:52.474172 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e4348b-10b3-482a-a64d-4c2bfe52fb69" path="/var/lib/kubelet/pods/f3e4348b-10b3-482a-a64d-4c2bfe52fb69/volumes" Jan 31 16:33:54 crc kubenswrapper[4730]: I0131 16:33:54.194575 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:33:54 crc kubenswrapper[4730]: I0131 16:33:54.241326 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:33:55 crc kubenswrapper[4730]: I0131 16:33:55.353287 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnkqr"] Jan 31 16:33:55 crc kubenswrapper[4730]: I0131 16:33:55.811999 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vnkqr" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="registry-server" containerID="cri-o://28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc" gracePeriod=2 Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.230389 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.380328 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5npw6\" (UniqueName: \"kubernetes.io/projected/f77d01ac-b8b8-436b-9626-6230af5c95b7-kube-api-access-5npw6\") pod \"f77d01ac-b8b8-436b-9626-6230af5c95b7\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.380375 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-utilities\") pod \"f77d01ac-b8b8-436b-9626-6230af5c95b7\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.380419 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-catalog-content\") pod \"f77d01ac-b8b8-436b-9626-6230af5c95b7\" (UID: \"f77d01ac-b8b8-436b-9626-6230af5c95b7\") " Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.381284 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-utilities" (OuterVolumeSpecName: "utilities") pod "f77d01ac-b8b8-436b-9626-6230af5c95b7" (UID: "f77d01ac-b8b8-436b-9626-6230af5c95b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.389295 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77d01ac-b8b8-436b-9626-6230af5c95b7-kube-api-access-5npw6" (OuterVolumeSpecName: "kube-api-access-5npw6") pod "f77d01ac-b8b8-436b-9626-6230af5c95b7" (UID: "f77d01ac-b8b8-436b-9626-6230af5c95b7"). InnerVolumeSpecName "kube-api-access-5npw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.425358 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f77d01ac-b8b8-436b-9626-6230af5c95b7" (UID: "f77d01ac-b8b8-436b-9626-6230af5c95b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.481757 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5npw6\" (UniqueName: \"kubernetes.io/projected/f77d01ac-b8b8-436b-9626-6230af5c95b7-kube-api-access-5npw6\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.481780 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.481791 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77d01ac-b8b8-436b-9626-6230af5c95b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.817639 4730 generic.go:334] "Generic (PLEG): container finished" podID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerID="28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc" exitCode=0 Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.817674 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnkqr" event={"ID":"f77d01ac-b8b8-436b-9626-6230af5c95b7","Type":"ContainerDied","Data":"28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc"} Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.817690 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnkqr" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.817698 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnkqr" event={"ID":"f77d01ac-b8b8-436b-9626-6230af5c95b7","Type":"ContainerDied","Data":"6d1a82d7b22a1bd4fbf1dd73b560e91eda767ee9595f038350dfb820b02f658b"} Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.817714 4730 scope.go:117] "RemoveContainer" containerID="28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.831570 4730 scope.go:117] "RemoveContainer" containerID="7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.831743 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnkqr"] Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.836166 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vnkqr"] Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.852594 4730 scope.go:117] "RemoveContainer" containerID="13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.864139 4730 scope.go:117] "RemoveContainer" containerID="28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc" Jan 31 16:33:56 crc kubenswrapper[4730]: E0131 16:33:56.864435 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc\": container with ID starting with 28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc not found: ID does not exist" containerID="28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.864482 
4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc"} err="failed to get container status \"28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc\": rpc error: code = NotFound desc = could not find container \"28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc\": container with ID starting with 28ce7a8ffad5c39f62becde4ad0409ccf7bdb3dc5512ddb3319edca4e889a3bc not found: ID does not exist" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.864508 4730 scope.go:117] "RemoveContainer" containerID="7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4" Jan 31 16:33:56 crc kubenswrapper[4730]: E0131 16:33:56.864787 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4\": container with ID starting with 7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4 not found: ID does not exist" containerID="7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.864832 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4"} err="failed to get container status \"7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4\": rpc error: code = NotFound desc = could not find container \"7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4\": container with ID starting with 7cf1e13a9e9d3870a63e3baeab687f7667e35f038d2e1931883efb31fc98c1c4 not found: ID does not exist" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.864847 4730 scope.go:117] "RemoveContainer" containerID="13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff" Jan 31 16:33:56 crc kubenswrapper[4730]: E0131 16:33:56.865523 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff\": container with ID starting with 13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff not found: ID does not exist" containerID="13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.865563 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff"} err="failed to get container status \"13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff\": rpc error: code = NotFound desc = could not find container \"13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff\": container with ID starting with 13895c86ab26b246ebac13097f6d4cc6497130443460b48b7466cb89d95ba8ff not found: ID does not exist" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.975414 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.975475 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" 
podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.975516 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.976365 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:33:56 crc kubenswrapper[4730]: I0131 16:33:56.976419 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c" gracePeriod=600 Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.717993 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7765894ccc-qjhfm"] Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718601 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718617 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718628 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718636 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718648 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891ea3c8-6fa0-4fa5-bfec-295b77622f8e" containerName="pruner" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718656 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="891ea3c8-6fa0-4fa5-bfec-295b77622f8e" containerName="pruner" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718672 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718680 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718689 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e4348b-10b3-482a-a64d-4c2bfe52fb69" containerName="oauth-openshift" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718696 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e4348b-10b3-482a-a64d-4c2bfe52fb69" containerName="oauth-openshift" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718706 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718716 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718728 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718735 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718747 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718754 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718766 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718773 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718785 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718792 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718821 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718830 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="extract-content" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718839 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718847 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718859 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718866 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="extract-utilities" Jan 31 16:33:57 crc kubenswrapper[4730]: E0131 16:33:57.718876 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718883 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718984 4730 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="61274bcb-156d-4bfd-806e-89500983ef42" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.718997 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d5cfe7-dcf8-42e1-af91-a9fb1209c93e" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.719007 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.719018 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="891ea3c8-6fa0-4fa5-bfec-295b77622f8e" containerName="pruner" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.719028 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e4348b-10b3-482a-a64d-4c2bfe52fb69" containerName="oauth-openshift" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.719040 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b701a69-5acf-4822-a395-e35001c38825" containerName="registry-server" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.719427 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.722108 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.722897 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.723003 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.723317 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.724684 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.724985 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.725339 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.725362 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.725596 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.725684 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.725716 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.727691 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.735106 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.740864 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.742171 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7765894ccc-qjhfm"] Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.750794 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.799505 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.799563 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.799599 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-audit-policies\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.799623 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhqw\" (UniqueName: \"kubernetes.io/projected/b0b2ddeb-92b9-433f-a71c-c8c113db2805-kube-api-access-drhqw\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.799647 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-error\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.799676 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-session\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " 
pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.799899 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.800021 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-service-ca\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.800058 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.800107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0b2ddeb-92b9-433f-a71c-c8c113db2805-audit-dir\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.800148 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-login\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.800232 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-router-certs\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.800277 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.800382 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.827629 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c" exitCode=0 Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.827683 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c"} Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.827722 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"9b11c9a3a6b003984d5cc7b0769b316d6026aca4dc2bc56230ee6ace4c824f75"} Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.901344 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-router-certs\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.901390 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.901455 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.901505 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.901533 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902316 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902323 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-audit-policies\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902385 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhqw\" (UniqueName: \"kubernetes.io/projected/b0b2ddeb-92b9-433f-a71c-c8c113db2805-kube-api-access-drhqw\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902407 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-error\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902429 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-session\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902487 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902497 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902570 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-service-ca\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902595 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902617 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-login\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902652 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0b2ddeb-92b9-433f-a71c-c8c113db2805-audit-dir\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.902887 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-audit-policies\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.903678 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-service-ca\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.903729 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0b2ddeb-92b9-433f-a71c-c8c113db2805-audit-dir\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.907140 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.907727 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-router-certs\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.908680 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.909145 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-session\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.909278 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-error\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.913509 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.917598 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-user-template-login\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.919918 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhqw\" (UniqueName: \"kubernetes.io/projected/b0b2ddeb-92b9-433f-a71c-c8c113db2805-kube-api-access-drhqw\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:57 crc kubenswrapper[4730]: I0131 16:33:57.934517 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0b2ddeb-92b9-433f-a71c-c8c113db2805-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7765894ccc-qjhfm\" (UID: \"b0b2ddeb-92b9-433f-a71c-c8c113db2805\") " pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:58 crc kubenswrapper[4730]: I0131 16:33:58.045385 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:58 crc kubenswrapper[4730]: I0131 16:33:58.441184 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7765894ccc-qjhfm"] Jan 31 16:33:58 crc kubenswrapper[4730]: W0131 16:33:58.456981 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0b2ddeb_92b9_433f_a71c_c8c113db2805.slice/crio-e5426df1259e5fc0fc0bd1555178a23306da9d060a4463290a972806317bd5af WatchSource:0}: Error finding container e5426df1259e5fc0fc0bd1555178a23306da9d060a4463290a972806317bd5af: Status 404 returned error can't find the container with id e5426df1259e5fc0fc0bd1555178a23306da9d060a4463290a972806317bd5af Jan 31 16:33:58 crc kubenswrapper[4730]: I0131 16:33:58.475426 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77d01ac-b8b8-436b-9626-6230af5c95b7" path="/var/lib/kubelet/pods/f77d01ac-b8b8-436b-9626-6230af5c95b7/volumes" Jan 31 16:33:58 crc kubenswrapper[4730]: I0131 16:33:58.832155 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" event={"ID":"b0b2ddeb-92b9-433f-a71c-c8c113db2805","Type":"ContainerStarted","Data":"1d0e02bcabfd6be805d89d520bd607a4cc540f9018fa31117775c8fc10b1515c"} Jan 31 16:33:58 crc kubenswrapper[4730]: I0131 16:33:58.832480 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:33:58 crc kubenswrapper[4730]: I0131 16:33:58.832492 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" event={"ID":"b0b2ddeb-92b9-433f-a71c-c8c113db2805","Type":"ContainerStarted","Data":"e5426df1259e5fc0fc0bd1555178a23306da9d060a4463290a972806317bd5af"} Jan 31 16:33:58 crc kubenswrapper[4730]: I0131 16:33:58.851576 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" podStartSLOduration=33.85156137 podStartE2EDuration="33.85156137s" podCreationTimestamp="2026-01-31 16:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:33:58.848266822 +0000 UTC m=+225.654323728" watchObservedRunningTime="2026-01-31 16:33:58.85156137 +0000 UTC m=+225.657618276" Jan 31 16:33:59 crc kubenswrapper[4730]: I0131 16:33:59.090340 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7765894ccc-qjhfm" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.255105 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.256896 4730 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.256955 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.257013 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.257329 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9" gracePeriod=15 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.257382 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383" gracePeriod=15 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.257462 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545" gracePeriod=15 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.257510 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8" gracePeriod=15 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.257556 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726" gracePeriod=15 Jan 31 16:34:12 crc kubenswrapper[4730]: E0131 16:34:12.257749 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.257847 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 16:34:12 crc kubenswrapper[4730]: E0131 16:34:12.257927 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.258005 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 16:34:12 crc kubenswrapper[4730]: E0131 16:34:12.258082 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.258140 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 16:34:12 crc kubenswrapper[4730]: E0131 16:34:12.258266 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.258845 4730 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 16:34:12 crc kubenswrapper[4730]: E0131 16:34:12.258871 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.258879 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 16:34:12 crc kubenswrapper[4730]: E0131 16:34:12.258896 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.258901 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 16:34:12 crc kubenswrapper[4730]: E0131 16:34:12.258918 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.258924 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.259740 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.259761 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.259770 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.259783 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.259793 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.259835 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.265252 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.402918 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.403149 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") 
pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.403177 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.403195 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.403212 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.403226 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.403243 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.403258 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504820 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504855 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504875 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504923 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504978 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.504993 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505057 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505087 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505109 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505129 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505147 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505166 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.505205 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.905141 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.906483 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.907315 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383" exitCode=0 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.907347 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545" exitCode=0 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.907356 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8" exitCode=0 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.907366 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726" exitCode=2 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.907471 4730 scope.go:117] "RemoveContainer" containerID="9e4fd341f08f511eda8b74287c7b8e6ccaab226fbb00b4dcee98b8432b820325" Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.911075 4730 generic.go:334] "Generic (PLEG): container finished" podID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" 
containerID="ecd93892bb3dd1bbdfc7a7d219244cfb7d3a21cd43e33f6cb4429d4a2e74e444" exitCode=0 Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.911137 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af34c31a-e26b-45f6-abbc-a1b8eafaf409","Type":"ContainerDied","Data":"ecd93892bb3dd1bbdfc7a7d219244cfb7d3a21cd43e33f6cb4429d4a2e74e444"} Jan 31 16:34:12 crc kubenswrapper[4730]: I0131 16:34:12.912362 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:13 crc kubenswrapper[4730]: I0131 16:34:13.917349 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.296713 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.297193 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.297946 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.298244 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.298502 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.298533 4730 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.298771 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.467423 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.554354 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.699153 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.699613 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.745585 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.747014 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.748518 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.748890 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.857933 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kube-api-access\") pod \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858002 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858021 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kubelet-dir\") pod \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858046 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-var-lock\") pod \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\" (UID: \"af34c31a-e26b-45f6-abbc-a1b8eafaf409\") " Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858080 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858121 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858328 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858366 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858382 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af34c31a-e26b-45f6-abbc-a1b8eafaf409" (UID: "af34c31a-e26b-45f6-abbc-a1b8eafaf409"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858398 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-var-lock" (OuterVolumeSpecName: "var-lock") pod "af34c31a-e26b-45f6-abbc-a1b8eafaf409" (UID: "af34c31a-e26b-45f6-abbc-a1b8eafaf409"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.858411 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.862936 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af34c31a-e26b-45f6-abbc-a1b8eafaf409" (UID: "af34c31a-e26b-45f6-abbc-a1b8eafaf409"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.924795 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.926853 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9" exitCode=0 Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.926927 4730 scope.go:117] "RemoveContainer" containerID="e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.927054 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.930199 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"af34c31a-e26b-45f6-abbc-a1b8eafaf409","Type":"ContainerDied","Data":"3424410473f881d1921328b5322b286a018be67a646fc2e9a281faa70d2ef3f3"} Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.930239 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3424410473f881d1921328b5322b286a018be67a646fc2e9a281faa70d2ef3f3" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.930310 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.946825 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.947105 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.947312 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.947485 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.951898 4730 scope.go:117] "RemoveContainer" containerID="65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545" Jan 31 16:34:14 crc kubenswrapper[4730]: E0131 16:34:14.955660 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.959426 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.959451 4730 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.959463 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.959474 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af34c31a-e26b-45f6-abbc-a1b8eafaf409-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.959485 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.959494 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.962820 4730 scope.go:117] "RemoveContainer" containerID="5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.973728 4730 scope.go:117] "RemoveContainer" containerID="30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.984249 4730 scope.go:117] "RemoveContainer" containerID="01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9" Jan 31 16:34:14 crc kubenswrapper[4730]: I0131 16:34:14.997197 4730 scope.go:117] "RemoveContainer" containerID="6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.013513 4730 scope.go:117] "RemoveContainer" containerID="e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383" Jan 31 16:34:15 crc kubenswrapper[4730]: E0131 16:34:15.013887 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\": container with ID starting with e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383 not found: ID does not exist" containerID="e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.013926 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383"} err="failed to get container status \"e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\": rpc error: code = NotFound desc = could not find container 
\"e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383\": container with ID starting with e413d3ca2f9ac40dbe35d716dcbf5b9588d06082738f3478d807772556bde383 not found: ID does not exist" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.013954 4730 scope.go:117] "RemoveContainer" containerID="65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545" Jan 31 16:34:15 crc kubenswrapper[4730]: E0131 16:34:15.014187 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\": container with ID starting with 65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545 not found: ID does not exist" containerID="65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014207 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545"} err="failed to get container status \"65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\": rpc error: code = NotFound desc = could not find container \"65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545\": container with ID starting with 65d906980b788cdfffae6be8e6b7ad8391fadfc7a485e48179d8fca491295545 not found: ID does not exist" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014221 4730 scope.go:117] "RemoveContainer" containerID="5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8" Jan 31 16:34:15 crc kubenswrapper[4730]: E0131 16:34:15.014407 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\": container with ID starting with 5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8 not found: ID does not exist" containerID="5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014431 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8"} err="failed to get container status \"5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\": rpc error: code = NotFound desc = could not find container \"5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8\": container with ID starting with 5b4ca6a27c68fc72a94bf93aaadc40c024079eee164bc65256484e5198c7e4f8 not found: ID does not exist" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014446 4730 scope.go:117] "RemoveContainer" containerID="30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726" Jan 31 16:34:15 crc kubenswrapper[4730]: E0131 16:34:15.014638 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\": container with ID starting with 30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726 not found: ID does not exist" containerID="30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014657 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726"} 
err="failed to get container status \"30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\": rpc error: code = NotFound desc = could not find container \"30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726\": container with ID starting with 30444b708c0226764932b47619ea85b9b22082e04bb4df11fd6e17caff22a726 not found: ID does not exist" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014669 4730 scope.go:117] "RemoveContainer" containerID="01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9" Jan 31 16:34:15 crc kubenswrapper[4730]: E0131 16:34:15.014829 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\": container with ID starting with 01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9 not found: ID does not exist" containerID="01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014844 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9"} err="failed to get container status \"01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\": rpc error: code = NotFound desc = could not find container \"01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9\": container with ID starting with 01144e7f3b271a989feb51993bf17f4d848d5b77cf751c93155dbccca6b74af9 not found: ID does not exist" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.014855 4730 scope.go:117] "RemoveContainer" containerID="6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054" Jan 31 16:34:15 crc kubenswrapper[4730]: E0131 16:34:15.015015 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\": container with ID starting with 6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054 not found: ID does not exist" containerID="6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054" Jan 31 16:34:15 crc kubenswrapper[4730]: I0131 16:34:15.015033 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054"} err="failed to get container status \"6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\": rpc error: code = NotFound desc = could not find container \"6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054\": container with ID starting with 6ccefe1c9d94062aeec6ef6eb475b5ad3904dcf2c439fa9180f0e37e3339d054 not found: ID does not exist" Jan 31 16:34:15 crc kubenswrapper[4730]: E0131 16:34:15.756351 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s" Jan 31 16:34:16 crc kubenswrapper[4730]: I0131 16:34:16.473491 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 16:34:17 crc kubenswrapper[4730]: E0131 16:34:17.305012 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:17 crc kubenswrapper[4730]: I0131 16:34:17.306023 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:17 crc kubenswrapper[4730]: W0131 16:34:17.326500 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8ff9250afe5cc199cf0208b87f420f35e42356d58421d701fefea1cad052e732 WatchSource:0}: Error finding container 8ff9250afe5cc199cf0208b87f420f35e42356d58421d701fefea1cad052e732: Status 404 returned error can't find the container with id 8ff9250afe5cc199cf0208b87f420f35e42356d58421d701fefea1cad052e732 Jan 31 16:34:17 crc kubenswrapper[4730]: E0131 16:34:17.330632 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fddfd0323bd99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 16:34:17.329032601 +0000 UTC m=+244.135089507,LastTimestamp:2026-01-31 16:34:17.329032601 +0000 UTC m=+244.135089507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 16:34:17 crc kubenswrapper[4730]: E0131 16:34:17.357519 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s" Jan 31 16:34:17 crc kubenswrapper[4730]: I0131 16:34:17.945979 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04"} Jan 31 16:34:17 crc kubenswrapper[4730]: I0131 16:34:17.946023 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8ff9250afe5cc199cf0208b87f420f35e42356d58421d701fefea1cad052e732"} Jan 31 16:34:17 crc kubenswrapper[4730]: E0131 16:34:17.946558 4730 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:34:17 crc kubenswrapper[4730]: I0131 16:34:17.946685 
4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:19 crc kubenswrapper[4730]: E0131 16:34:19.682747 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fddfd0323bd99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 16:34:17.329032601 +0000 UTC m=+244.135089507,LastTimestamp:2026-01-31 16:34:17.329032601 +0000 UTC m=+244.135089507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 16:34:20 crc kubenswrapper[4730]: E0131 16:34:20.558719 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="6.4s" Jan 31 16:34:24 crc kubenswrapper[4730]: I0131 16:34:24.463955 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:26 crc kubenswrapper[4730]: E0131 16:34:26.959738 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="7s" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.000045 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.000108 4730 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f" exitCode=1 Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.000148 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f"} Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.000605 4730 scope.go:117] "RemoveContainer" 
containerID="68fb03e1960635bfab31cc706506b72e3e680e55983f942b1e99b4e0af842a9f" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.001375 4730 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.001627 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.463667 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.464713 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.465193 4730 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.478195 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.478225 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:27 crc kubenswrapper[4730]: E0131 16:34:27.478661 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:27 crc kubenswrapper[4730]: I0131 16:34:27.479151 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:27 crc kubenswrapper[4730]: W0131 16:34:27.498720 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2e53878951dd820d9e7f273ec3d1171e8585d559c9c40fe2fc7eb69ead9b39b9 WatchSource:0}: Error finding container 2e53878951dd820d9e7f273ec3d1171e8585d559c9c40fe2fc7eb69ead9b39b9: Status 404 returned error can't find the container with id 2e53878951dd820d9e7f273ec3d1171e8585d559c9c40fe2fc7eb69ead9b39b9 Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.007515 4730 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e20a87016e2ead0098428ef9dae22f0b9fa278cb6778fb9153aa6d90b4845482" exitCode=0 Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.007656 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e20a87016e2ead0098428ef9dae22f0b9fa278cb6778fb9153aa6d90b4845482"} Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.007957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2e53878951dd820d9e7f273ec3d1171e8585d559c9c40fe2fc7eb69ead9b39b9"} Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.008237 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.008252 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.008779 4730 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:28 crc kubenswrapper[4730]: E0131 16:34:28.008987 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.009048 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.012555 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.012614 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7d56bed4347e8f29612c9b0529c9a8add110b6a61d2ef8e984b2f8fc919cd27c"} Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.013661 4730 status_manager.go:851] "Failed to get status for pod" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:28 crc kubenswrapper[4730]: I0131 16:34:28.014109 4730 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 31 16:34:29 crc kubenswrapper[4730]: I0131 16:34:29.027646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"63caf594b7e42e3843f3d0bcf9e29471cb0cb67fac1f72a8ac03fc0bfb15b88d"} Jan 31 16:34:29 crc kubenswrapper[4730]: I0131 16:34:29.028074 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9fb2fb6f2b282e8a0fc390e018c167cf0ef586e4cefb34d268b248aefcfa8cbd"} Jan 31 16:34:29 crc kubenswrapper[4730]: I0131 16:34:29.028087 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"794527515fb4aff44ff1322e3176bbcc76c3e1098c9333f02acfe72ffaf203f4"} Jan 31 16:34:29 crc kubenswrapper[4730]: I0131 16:34:29.028096 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f66dec3f5b05e337057d51f5f5216c05ffe072043e0840646a73449581ee981d"} Jan 31 16:34:30 crc kubenswrapper[4730]: I0131 16:34:30.034520 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"879f7fdb04843bc90809c327a0cb01523c5d59ff0602644fd549eb3690fcb338"} Jan 31 16:34:30 crc kubenswrapper[4730]: I0131 16:34:30.035091 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:30 crc kubenswrapper[4730]: I0131 16:34:30.035205 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:30 crc kubenswrapper[4730]: I0131 16:34:30.035229 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:30 crc kubenswrapper[4730]: I0131 16:34:30.972124 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:34:32 crc kubenswrapper[4730]: I0131 16:34:32.480005 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:32 crc kubenswrapper[4730]: 
I0131 16:34:32.480427 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:32 crc kubenswrapper[4730]: I0131 16:34:32.489823 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:33 crc kubenswrapper[4730]: I0131 16:34:33.211611 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:34:33 crc kubenswrapper[4730]: I0131 16:34:33.227149 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:34:35 crc kubenswrapper[4730]: I0131 16:34:35.042947 4730 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:35 crc kubenswrapper[4730]: I0131 16:34:35.059583 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:35 crc kubenswrapper[4730]: I0131 16:34:35.059610 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:35 crc kubenswrapper[4730]: I0131 16:34:35.063348 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:35 crc kubenswrapper[4730]: I0131 16:34:35.116269 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b88d4589-ff98-49cf-b958-b04da090da20" Jan 31 16:34:36 crc kubenswrapper[4730]: I0131 16:34:36.063501 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:36 crc kubenswrapper[4730]: I0131 16:34:36.063529 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a821a82c-cea5-41e2-aa16-abfb02c7e54c" Jan 31 16:34:36 crc kubenswrapper[4730]: I0131 16:34:36.067593 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b88d4589-ff98-49cf-b958-b04da090da20" Jan 31 16:34:40 crc kubenswrapper[4730]: I0131 16:34:40.983445 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 16:34:44 crc kubenswrapper[4730]: I0131 16:34:44.693576 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 16:34:44 crc kubenswrapper[4730]: I0131 16:34:44.783050 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 16:34:44 crc kubenswrapper[4730]: I0131 16:34:44.811037 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 16:34:45 crc kubenswrapper[4730]: I0131 16:34:45.298104 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 16:34:45 crc kubenswrapper[4730]: I0131 
16:34:45.857437 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 16:34:45 crc kubenswrapper[4730]: I0131 16:34:45.863983 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 16:34:45 crc kubenswrapper[4730]: I0131 16:34:45.869525 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 16:34:45 crc kubenswrapper[4730]: I0131 16:34:45.876232 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 16:34:45 crc kubenswrapper[4730]: I0131 16:34:45.879296 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 16:34:45 crc kubenswrapper[4730]: I0131 16:34:45.942744 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.309078 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.565631 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.595516 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.608135 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.673262 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.754730 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.779763 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.901269 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 16:34:46 crc kubenswrapper[4730]: I0131 16:34:46.987834 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.023245 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.107562 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.223347 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.256371 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.277770 4730 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.345634 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.419719 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.456193 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.469681 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.475040 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.508747 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.569440 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.632581 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.663909 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.685748 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.694794 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.724852 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.787268 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.792509 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 16:34:47 crc kubenswrapper[4730]: I0131 16:34:47.842402 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.078518 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.197662 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.295168 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.358960 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.427876 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.612409 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.618122 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.618224 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.672585 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.704712 4730 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.740986 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.753425 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.821288 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.849630 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.919518 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.926488 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 16:34:48 crc kubenswrapper[4730]: I0131 16:34:48.969792 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.105224 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.148772 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.172785 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.287270 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.318121 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.338728 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.403988 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.449843 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.481936 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.500508 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.562251 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.567309 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.569393 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.633971 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.636607 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.699730 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.771630 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.814990 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.904285 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 16:34:49 crc kubenswrapper[4730]: I0131 16:34:49.976793 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.005068 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.043919 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.075756 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.077706 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.083088 4730 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.083168 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.089453 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.107770 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.107745922 podStartE2EDuration="15.107745922s" podCreationTimestamp="2026-01-31 16:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:34:50.1033048 +0000 UTC m=+276.909361716" watchObservedRunningTime="2026-01-31 16:34:50.107745922 +0000 UTC m=+276.913802878" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.133830 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.175895 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.245292 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.467698 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.471917 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.477848 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.517751 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.519515 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.560301 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.576891 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.578045 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.606533 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.661625 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.690560 4730 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-console"/"networking-console-plugin" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.695472 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.740084 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.757955 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.769852 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.883486 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.906635 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 16:34:50 crc kubenswrapper[4730]: I0131 16:34:50.995335 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.002028 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.157921 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.165051 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.350848 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.361253 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.435735 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.468186 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.503996 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.526471 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.529634 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.591934 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.672767 4730 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.748163 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.774514 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.797319 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.864596 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.902973 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.911690 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.939141 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.964067 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 16:34:51 crc kubenswrapper[4730]: I0131 16:34:51.968161 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.014902 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.099383 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.247037 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.348069 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.357902 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.441973 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.570072 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.658800 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.735028 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.804146 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.818621 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.854822 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.923690 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 16:34:52 crc kubenswrapper[4730]: I0131 16:34:52.979995 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.027098 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.059937 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.153737 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.159837 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.211444 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.218855 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.228835 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.373101 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.392936 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.528164 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.604099 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.669901 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.678281 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.708636 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 
16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.870667 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.946006 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 16:34:53 crc kubenswrapper[4730]: I0131 16:34:53.990253 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.031089 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.155694 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.194578 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.372524 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.494920 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.533667 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.598588 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.652275 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.800051 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.855955 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.899982 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.907114 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.916901 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.937182 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 16:34:54 crc kubenswrapper[4730]: I0131 16:34:54.996976 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.001162 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.040845 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.088865 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.099281 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.115522 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.117742 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.139722 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.223705 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.227723 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.242882 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.250447 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.320488 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.343082 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.363655 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.374711 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.383388 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.442832 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.448859 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.478969 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.529558 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.690721 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.819174 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.879209 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.903878 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 16:34:55 crc kubenswrapper[4730]: I0131 16:34:55.928661 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.234483 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.311254 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.333540 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.413414 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.455582 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.486171 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.566247 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.682517 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.731769 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.732495 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.838168 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.857917 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.883790 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 16:34:56 crc kubenswrapper[4730]: I0131 16:34:56.933167 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 16:34:57 crc 
kubenswrapper[4730]: I0131 16:34:57.061105 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.098107 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.101263 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.147752 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.277844 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.507048 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.522885 4730 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.523070 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04" gracePeriod=5 Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.674872 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.711164 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 16:34:57 crc kubenswrapper[4730]: I0131 16:34:57.819732 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.018488 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.022718 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.089196 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.105500 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.331961 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.368088 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.370223 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.528970 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.549079 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.720739 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.722917 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.751436 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.752377 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.798744 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.823563 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 16:34:58 crc kubenswrapper[4730]: I0131 16:34:58.862855 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.101323 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.254697 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.260870 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.457812 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.527079 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwsps"] Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.527330 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xwsps" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="registry-server" containerID="cri-o://13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2" gracePeriod=30 Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.537684 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jq8n"] Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.538219 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7jq8n" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="registry-server" 
containerID="cri-o://5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e" gracePeriod=30 Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.553439 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-txbq6"] Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.553636 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerName="marketplace-operator" containerID="cri-o://fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69" gracePeriod=30 Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.565023 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c9rs"] Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.565395 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7c9rs" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="registry-server" containerID="cri-o://b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2" gracePeriod=30 Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.587114 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f78ml"] Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.587658 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f78ml" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="registry-server" containerID="cri-o://153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40" gracePeriod=30 Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.588836 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.615166 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7c7m8"] Jan 31 16:34:59 crc kubenswrapper[4730]: E0131 16:34:59.615360 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.615372 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 16:34:59 crc kubenswrapper[4730]: E0131 16:34:59.615382 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" containerName="installer" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.615387 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" containerName="installer" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.615498 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.615511 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="af34c31a-e26b-45f6-abbc-a1b8eafaf409" containerName="installer" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.615862 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.635126 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7c7m8"] Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.686183 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.708147 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8f25085-b681-4c8d-a35e-363253891c50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.708248 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8f25085-b681-4c8d-a35e-363253891c50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.708274 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mqp\" (UniqueName: \"kubernetes.io/projected/a8f25085-b681-4c8d-a35e-363253891c50-kube-api-access-c5mqp\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.729140 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.749154 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.749306 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.810045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8f25085-b681-4c8d-a35e-363253891c50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.810384 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mqp\" (UniqueName: \"kubernetes.io/projected/a8f25085-b681-4c8d-a35e-363253891c50-kube-api-access-c5mqp\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.810427 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8f25085-b681-4c8d-a35e-363253891c50-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.813038 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8f25085-b681-4c8d-a35e-363253891c50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.819609 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8f25085-b681-4c8d-a35e-363253891c50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.836914 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mqp\" (UniqueName: \"kubernetes.io/projected/a8f25085-b681-4c8d-a35e-363253891c50-kube-api-access-c5mqp\") pod \"marketplace-operator-79b997595-7c7m8\" (UID: \"a8f25085-b681-4c8d-a35e-363253891c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.912637 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.925138 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.971914 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:34:59 crc kubenswrapper[4730]: I0131 16:34:59.995206 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.024680 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.112854 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d9x9\" (UniqueName: \"kubernetes.io/projected/24e875c6-16c4-43f2-8533-7d1af60844fb-kube-api-access-8d9x9\") pod \"24e875c6-16c4-43f2-8533-7d1af60844fb\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.112927 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-catalog-content\") pod \"24e875c6-16c4-43f2-8533-7d1af60844fb\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.113034 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-utilities\") pod \"24e875c6-16c4-43f2-8533-7d1af60844fb\" (UID: \"24e875c6-16c4-43f2-8533-7d1af60844fb\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.114154 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-utilities" (OuterVolumeSpecName: "utilities") pod "24e875c6-16c4-43f2-8533-7d1af60844fb" (UID: "24e875c6-16c4-43f2-8533-7d1af60844fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.114596 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.120310 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.122424 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e875c6-16c4-43f2-8533-7d1af60844fb-kube-api-access-8d9x9" (OuterVolumeSpecName: "kube-api-access-8d9x9") pod "24e875c6-16c4-43f2-8533-7d1af60844fb" (UID: "24e875c6-16c4-43f2-8533-7d1af60844fb"). InnerVolumeSpecName "kube-api-access-8d9x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.166508 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.201156 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24e875c6-16c4-43f2-8533-7d1af60844fb" (UID: "24e875c6-16c4-43f2-8533-7d1af60844fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.213714 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-catalog-content\") pod \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.214026 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-utilities\") pod \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.214087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmgpt\" (UniqueName: \"kubernetes.io/projected/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-kube-api-access-xmgpt\") pod \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\" (UID: \"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.214350 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.214361 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d9x9\" (UniqueName: \"kubernetes.io/projected/24e875c6-16c4-43f2-8533-7d1af60844fb-kube-api-access-8d9x9\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.214371 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e875c6-16c4-43f2-8533-7d1af60844fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.215687 4730 generic.go:334] "Generic (PLEG): container finished" podID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerID="fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69" exitCode=0 Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.215845 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" event={"ID":"d6d0cf39-4835-4f5d-8c5a-9521331913ac","Type":"ContainerDied","Data":"fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.215945 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" event={"ID":"d6d0cf39-4835-4f5d-8c5a-9521331913ac","Type":"ContainerDied","Data":"145ef8c915165b571490b4e9b80525d466762bece7835f637e874248074aadf3"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.216030 4730 scope.go:117] "RemoveContainer" containerID="fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.216324 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-txbq6" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.218266 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-utilities" (OuterVolumeSpecName: "utilities") pod "d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" (UID: "d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.218668 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.220005 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-kube-api-access-xmgpt" (OuterVolumeSpecName: "kube-api-access-xmgpt") pod "d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" (UID: "d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5"). InnerVolumeSpecName "kube-api-access-xmgpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.233549 4730 generic.go:334] "Generic (PLEG): container finished" podID="01ab894a-0ddc-46a2-8027-96606aae9396" containerID="153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40" exitCode=0 Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.233618 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f78ml" event={"ID":"01ab894a-0ddc-46a2-8027-96606aae9396","Type":"ContainerDied","Data":"153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.233643 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f78ml" event={"ID":"01ab894a-0ddc-46a2-8027-96606aae9396","Type":"ContainerDied","Data":"039ad1a66e08c2af06ea29cc46d20edfe817d2205104e959c1c1d02455a4ce3e"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.235089 4730 scope.go:117] "RemoveContainer" containerID="fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.235784 4730 generic.go:334] "Generic (PLEG): container finished" podID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerID="13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2" exitCode=0 Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.235840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwsps" event={"ID":"e8d7fc22-9a5c-4569-821d-c915ab1f5657","Type":"ContainerDied","Data":"13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.235854 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwsps" event={"ID":"e8d7fc22-9a5c-4569-821d-c915ab1f5657","Type":"ContainerDied","Data":"d32e6d6aae1c290bc0d67b957f32c8090fa6732f34905e6ef2d56871c1a3d4c5"} Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.235899 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69\": container with ID starting with fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69 not found: ID does not exist" 
containerID="fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.235920 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69"} err="failed to get container status \"fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69\": rpc error: code = NotFound desc = could not find container \"fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69\": container with ID starting with fdc2f6478d2d8b66745ef7cab46b6aec9fa0a64248ace20bc382a6224c540f69 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.235936 4730 scope.go:117] "RemoveContainer" containerID="153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.236020 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwsps" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.247838 4730 generic.go:334] "Generic (PLEG): container finished" podID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerID="5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e" exitCode=0 Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.247920 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jq8n" event={"ID":"24e875c6-16c4-43f2-8533-7d1af60844fb","Type":"ContainerDied","Data":"5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.247945 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jq8n" event={"ID":"24e875c6-16c4-43f2-8533-7d1af60844fb","Type":"ContainerDied","Data":"cf25a575ca568bb81c5506910efc7f11b523b95e68ae18a96121e7e36e8def33"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.248010 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jq8n" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.249435 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" (UID: "d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.253197 4730 generic.go:334] "Generic (PLEG): container finished" podID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerID="b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2" exitCode=0 Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.253383 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c9rs" event={"ID":"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5","Type":"ContainerDied","Data":"b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.253410 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c9rs" event={"ID":"d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5","Type":"ContainerDied","Data":"5d66e1f48b35199b1abb4265e3e43f1c2709d8d39c3821d9cdab75e6118d1737"} Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.253468 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c9rs" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.259986 4730 scope.go:117] "RemoveContainer" containerID="c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.281361 4730 scope.go:117] "RemoveContainer" containerID="19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.293051 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jq8n"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.299655 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7jq8n"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.302285 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7c7m8"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.302896 4730 scope.go:117] "RemoveContainer" containerID="153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.303439 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40\": container with ID starting with 153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40 not found: ID does not exist" containerID="153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.303463 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40"} err="failed to get container status \"153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40\": rpc error: code = NotFound desc = could not find container \"153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40\": container with ID starting with 153610c5703cf23d40908a3998979215ea0e6975ede6fa177b3559a2a3a4be40 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.303486 4730 scope.go:117] "RemoveContainer" containerID="c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.303944 4730 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e\": container with ID starting with c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e not found: ID does not exist" containerID="c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.303965 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e"} err="failed to get container status \"c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e\": rpc error: code = NotFound desc = could not find container \"c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e\": container with ID starting with c8daf890bc4db88ca69f9c6cf587e58d35d938200fa3a8b3a06a4c49965be31e not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.303979 4730 scope.go:117] "RemoveContainer" containerID="19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.304316 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596\": container with ID starting with 19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596 not found: ID does not exist" containerID="19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.304332 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596"} err="failed to get container status \"19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596\": rpc error: code = NotFound desc = could not find container \"19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596\": container with ID starting with 19b03747fb64a8bc841e0bee5a8e92853434d16e6f42718988c1d4dd5a296596 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.304343 4730 scope.go:117] "RemoveContainer" containerID="13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.312434 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c9rs"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.315725 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-operator-metrics\") pod \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.316131 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbb5\" (UniqueName: \"kubernetes.io/projected/e8d7fc22-9a5c-4569-821d-c915ab1f5657-kube-api-access-qnbb5\") pod \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.316352 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-catalog-content\") 
pod \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.321063 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c9rs"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.323184 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-utilities\") pod \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\" (UID: \"e8d7fc22-9a5c-4569-821d-c915ab1f5657\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.323346 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wq8f\" (UniqueName: \"kubernetes.io/projected/d6d0cf39-4835-4f5d-8c5a-9521331913ac-kube-api-access-7wq8f\") pod \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.323449 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-trusted-ca\") pod \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\" (UID: \"d6d0cf39-4835-4f5d-8c5a-9521331913ac\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.323879 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d6d0cf39-4835-4f5d-8c5a-9521331913ac" (UID: "d6d0cf39-4835-4f5d-8c5a-9521331913ac"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.324493 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d7fc22-9a5c-4569-821d-c915ab1f5657-kube-api-access-qnbb5" (OuterVolumeSpecName: "kube-api-access-qnbb5") pod "e8d7fc22-9a5c-4569-821d-c915ab1f5657" (UID: "e8d7fc22-9a5c-4569-821d-c915ab1f5657"). InnerVolumeSpecName "kube-api-access-qnbb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.325284 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-utilities" (OuterVolumeSpecName: "utilities") pod "e8d7fc22-9a5c-4569-821d-c915ab1f5657" (UID: "e8d7fc22-9a5c-4569-821d-c915ab1f5657"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.325693 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d6d0cf39-4835-4f5d-8c5a-9521331913ac" (UID: "d6d0cf39-4835-4f5d-8c5a-9521331913ac"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.325865 4730 scope.go:117] "RemoveContainer" containerID="9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.330748 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d0cf39-4835-4f5d-8c5a-9521331913ac-kube-api-access-7wq8f" (OuterVolumeSpecName: "kube-api-access-7wq8f") pod "d6d0cf39-4835-4f5d-8c5a-9521331913ac" (UID: "d6d0cf39-4835-4f5d-8c5a-9521331913ac"). InnerVolumeSpecName "kube-api-access-7wq8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.332375 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.335311 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.335374 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmgpt\" (UniqueName: \"kubernetes.io/projected/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5-kube-api-access-xmgpt\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.351018 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.370007 4730 scope.go:117] "RemoveContainer" containerID="ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.385147 4730 scope.go:117] "RemoveContainer" containerID="13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.385497 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2\": container with ID starting with 13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2 not found: ID does not exist" containerID="13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.385539 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2"} err="failed to get container status \"13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2\": rpc error: code = NotFound desc = could not find container \"13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2\": container with ID starting with 13f50af20d783513e9aa50d53f19b585ce3471d671188eb22b89320bd474d3a2 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.385588 4730 scope.go:117] "RemoveContainer" containerID="9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.385876 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4\": 
container with ID starting with 9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4 not found: ID does not exist" containerID="9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.385907 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4"} err="failed to get container status \"9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4\": rpc error: code = NotFound desc = could not find container \"9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4\": container with ID starting with 9789310710cda53b84f34b92a43677cb32d494eceb5885cbc5f2e9f939a666f4 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.385928 4730 scope.go:117] "RemoveContainer" containerID="ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.386253 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952\": container with ID starting with ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952 not found: ID does not exist" containerID="ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.386291 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952"} err="failed to get container status \"ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952\": rpc error: code = NotFound desc = could not find container \"ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952\": container with ID starting with ebef7ca4aaf8d3200904173efd81eeea44d0eebd2daff37cf9274dfffdd53952 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.386310 4730 scope.go:117] "RemoveContainer" containerID="5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.389721 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d7fc22-9a5c-4569-821d-c915ab1f5657" (UID: "e8d7fc22-9a5c-4569-821d-c915ab1f5657"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.401243 4730 scope.go:117] "RemoveContainer" containerID="8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.417666 4730 scope.go:117] "RemoveContainer" containerID="53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.432325 4730 scope.go:117] "RemoveContainer" containerID="5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.433181 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e\": container with ID starting with 5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e not found: ID does not exist" containerID="5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.433208 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e"} err="failed to get container status \"5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e\": rpc error: code = NotFound desc = could not find container \"5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e\": container with ID starting with 5d8551340c944b4b3c72f8ecee713510a5a5c925dd652f49bf591e3f09bbfc3e not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.433228 4730 scope.go:117] "RemoveContainer" containerID="8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.433445 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a\": container with ID starting with 8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a not found: ID does not exist" containerID="8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.433468 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a"} err="failed to get container status \"8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a\": rpc error: code = NotFound desc = could not find container \"8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a\": container with ID starting with 8133a08573e0dc163fdd5c5f23d95603b3f362d9dfab955c0dcfbee9b95e017a not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.433483 4730 scope.go:117] "RemoveContainer" containerID="53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.433702 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a\": container with ID starting with 53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a not found: ID does not exist" containerID="53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a" Jan 31 16:35:00 crc 
kubenswrapper[4730]: I0131 16:35:00.433721 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a"} err="failed to get container status \"53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a\": rpc error: code = NotFound desc = could not find container \"53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a\": container with ID starting with 53499e1e0c0d1131208869970045fda7fe05eb66a38fb0c758f8c0c5fba4593a not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.433736 4730 scope.go:117] "RemoveContainer" containerID="b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436428 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-utilities\") pod \"01ab894a-0ddc-46a2-8027-96606aae9396\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436497 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjnb6\" (UniqueName: \"kubernetes.io/projected/01ab894a-0ddc-46a2-8027-96606aae9396-kube-api-access-zjnb6\") pod \"01ab894a-0ddc-46a2-8027-96606aae9396\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436553 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-catalog-content\") pod \"01ab894a-0ddc-46a2-8027-96606aae9396\" (UID: \"01ab894a-0ddc-46a2-8027-96606aae9396\") " Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436761 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436783 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d7fc22-9a5c-4569-821d-c915ab1f5657-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436809 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wq8f\" (UniqueName: \"kubernetes.io/projected/d6d0cf39-4835-4f5d-8c5a-9521331913ac-kube-api-access-7wq8f\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436824 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436835 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6d0cf39-4835-4f5d-8c5a-9521331913ac-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.436847 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnbb5\" (UniqueName: \"kubernetes.io/projected/e8d7fc22-9a5c-4569-821d-c915ab1f5657-kube-api-access-qnbb5\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.438065 
4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-utilities" (OuterVolumeSpecName: "utilities") pod "01ab894a-0ddc-46a2-8027-96606aae9396" (UID: "01ab894a-0ddc-46a2-8027-96606aae9396"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.440116 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab894a-0ddc-46a2-8027-96606aae9396-kube-api-access-zjnb6" (OuterVolumeSpecName: "kube-api-access-zjnb6") pod "01ab894a-0ddc-46a2-8027-96606aae9396" (UID: "01ab894a-0ddc-46a2-8027-96606aae9396"). InnerVolumeSpecName "kube-api-access-zjnb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.445938 4730 scope.go:117] "RemoveContainer" containerID="226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.461566 4730 scope.go:117] "RemoveContainer" containerID="7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.489975 4730 scope.go:117] "RemoveContainer" containerID="b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.491132 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" path="/var/lib/kubelet/pods/24e875c6-16c4-43f2-8533-7d1af60844fb/volumes" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.491742 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" path="/var/lib/kubelet/pods/d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5/volumes" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.492495 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2\": container with ID starting with b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2 not found: ID does not exist" containerID="b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.492525 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2"} err="failed to get container status \"b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2\": rpc error: code = NotFound desc = could not find container \"b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2\": container with ID starting with b8f1fb615808aa7970b58b617b72a2308b86231fb8a13a9fcce83526c6b6c0f2 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.492552 4730 scope.go:117] "RemoveContainer" containerID="226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.494027 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086\": container with ID starting with 226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086 not found: ID does not exist" 
containerID="226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.494063 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086"} err="failed to get container status \"226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086\": rpc error: code = NotFound desc = could not find container \"226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086\": container with ID starting with 226f342f66e47c29052b85ee0c6030ecdee8e1ed74145179cbe853c0d7b6b086 not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.494078 4730 scope.go:117] "RemoveContainer" containerID="7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c" Jan 31 16:35:00 crc kubenswrapper[4730]: E0131 16:35:00.494317 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c\": container with ID starting with 7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c not found: ID does not exist" containerID="7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.494347 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c"} err="failed to get container status \"7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c\": rpc error: code = NotFound desc = could not find container \"7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c\": container with ID starting with 7990422ccd48ba26ecc342157e642f4a11a0e350d81cf9bb4b5a57920d725d8c not found: ID does not exist" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.509215 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.536055 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-txbq6"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.538348 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.538374 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjnb6\" (UniqueName: \"kubernetes.io/projected/01ab894a-0ddc-46a2-8027-96606aae9396-kube-api-access-zjnb6\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.543199 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-txbq6"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.549495 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwsps"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.551868 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xwsps"] Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.590386 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01ab894a-0ddc-46a2-8027-96606aae9396" (UID: "01ab894a-0ddc-46a2-8027-96606aae9396"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.604669 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 16:35:00 crc kubenswrapper[4730]: I0131 16:35:00.639387 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ab894a-0ddc-46a2-8027-96606aae9396-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.258130 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" event={"ID":"a8f25085-b681-4c8d-a35e-363253891c50","Type":"ContainerStarted","Data":"86fc4abe8fd00079a9e298a7c02332eda9728f7a5792f7a06f7473de3d90dc5d"} Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.258171 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" event={"ID":"a8f25085-b681-4c8d-a35e-363253891c50","Type":"ContainerStarted","Data":"6c47822619033759ae269dc79c4f570995df1fb25153df2c3d0c5b34ac13f90f"} Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.258398 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.260368 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f78ml" Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.262747 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.292660 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7c7m8" podStartSLOduration=2.292644905 podStartE2EDuration="2.292644905s" podCreationTimestamp="2026-01-31 16:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:35:01.274616309 +0000 UTC m=+288.080673245" watchObservedRunningTime="2026-01-31 16:35:01.292644905 +0000 UTC m=+288.098701821" Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.323387 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f78ml"] Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.334110 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f78ml"] Jan 31 16:35:01 crc kubenswrapper[4730]: I0131 16:35:01.779520 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.475545 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" path="/var/lib/kubelet/pods/01ab894a-0ddc-46a2-8027-96606aae9396/volumes" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.477227 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" path="/var/lib/kubelet/pods/d6d0cf39-4835-4f5d-8c5a-9521331913ac/volumes" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.478134 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" path="/var/lib/kubelet/pods/e8d7fc22-9a5c-4569-821d-c915ab1f5657/volumes" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.636686 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.636746 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.662473 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.662508 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.662527 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.662551 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.662570 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.662738 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.662763 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.663492 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.663529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.672166 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.764486 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.764521 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.764531 4730 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.764539 4730 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:02 crc kubenswrapper[4730]: I0131 16:35:02.764549 4730 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:03 crc kubenswrapper[4730]: I0131 16:35:03.278080 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 16:35:03 crc kubenswrapper[4730]: I0131 16:35:03.278148 4730 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04" exitCode=137 Jan 31 16:35:03 crc kubenswrapper[4730]: I0131 16:35:03.279274 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 16:35:03 crc kubenswrapper[4730]: I0131 16:35:03.285893 4730 scope.go:117] "RemoveContainer" containerID="7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04" Jan 31 16:35:03 crc kubenswrapper[4730]: I0131 16:35:03.302512 4730 scope.go:117] "RemoveContainer" containerID="7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04" Jan 31 16:35:03 crc kubenswrapper[4730]: E0131 16:35:03.302915 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04\": container with ID starting with 7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04 not found: ID does not exist" containerID="7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04" Jan 31 16:35:03 crc kubenswrapper[4730]: I0131 16:35:03.303027 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04"} err="failed to get container status \"7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04\": rpc error: code = NotFound desc = could not find container \"7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04\": container with ID starting with 7fd7c6f1b4654d3ab033682b2251c21c4991754155eb4e39923a492cb4f4ea04 not found: ID does not exist" Jan 31 16:35:04 crc kubenswrapper[4730]: I0131 16:35:04.471352 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 16:35:14 crc kubenswrapper[4730]: I0131 16:35:14.230178 4730 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 16:35:17 crc kubenswrapper[4730]: I0131 16:35:17.183043 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 16:35:17 crc kubenswrapper[4730]: I0131 16:35:17.903843 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 16:35:26 crc kubenswrapper[4730]: I0131 16:35:26.808957 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w2n4l"] Jan 31 16:35:26 crc kubenswrapper[4730]: I0131 16:35:26.810316 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" podUID="9a029edf-d8ad-4314-9296-0f6c4f707330" containerName="controller-manager" containerID="cri-o://e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5" gracePeriod=30 Jan 31 16:35:26 crc kubenswrapper[4730]: I0131 16:35:26.901369 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls"] Jan 31 16:35:26 crc kubenswrapper[4730]: I0131 16:35:26.901574 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" podUID="a4b96638-d5c4-43d4-ab38-15972a55d0f4" containerName="route-controller-manager" containerID="cri-o://0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1" gracePeriod=30 Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 
16:35:27.124756 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.192464 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-config\") pod \"9a029edf-d8ad-4314-9296-0f6c4f707330\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.192847 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxflx\" (UniqueName: \"kubernetes.io/projected/9a029edf-d8ad-4314-9296-0f6c4f707330-kube-api-access-rxflx\") pod \"9a029edf-d8ad-4314-9296-0f6c4f707330\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.192928 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a029edf-d8ad-4314-9296-0f6c4f707330-serving-cert\") pod \"9a029edf-d8ad-4314-9296-0f6c4f707330\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.193110 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-proxy-ca-bundles\") pod \"9a029edf-d8ad-4314-9296-0f6c4f707330\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.193444 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-client-ca\") pod \"9a029edf-d8ad-4314-9296-0f6c4f707330\" (UID: \"9a029edf-d8ad-4314-9296-0f6c4f707330\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.194732 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9a029edf-d8ad-4314-9296-0f6c4f707330" (UID: "9a029edf-d8ad-4314-9296-0f6c4f707330"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.195066 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-config" (OuterVolumeSpecName: "config") pod "9a029edf-d8ad-4314-9296-0f6c4f707330" (UID: "9a029edf-d8ad-4314-9296-0f6c4f707330"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.195421 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a029edf-d8ad-4314-9296-0f6c4f707330" (UID: "9a029edf-d8ad-4314-9296-0f6c4f707330"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.198707 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a029edf-d8ad-4314-9296-0f6c4f707330-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a029edf-d8ad-4314-9296-0f6c4f707330" (UID: "9a029edf-d8ad-4314-9296-0f6c4f707330"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.203165 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a029edf-d8ad-4314-9296-0f6c4f707330-kube-api-access-rxflx" (OuterVolumeSpecName: "kube-api-access-rxflx") pod "9a029edf-d8ad-4314-9296-0f6c4f707330" (UID: "9a029edf-d8ad-4314-9296-0f6c4f707330"). InnerVolumeSpecName "kube-api-access-rxflx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.239749 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.294990 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/a4b96638-d5c4-43d4-ab38-15972a55d0f4-kube-api-access-x75gm\") pod \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-config\") pod \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295068 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-client-ca\") pod \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295092 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b96638-d5c4-43d4-ab38-15972a55d0f4-serving-cert\") pod \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\" (UID: \"a4b96638-d5c4-43d4-ab38-15972a55d0f4\") " Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295289 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295301 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295329 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a029edf-d8ad-4314-9296-0f6c4f707330-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295339 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxflx\" (UniqueName: 
\"kubernetes.io/projected/9a029edf-d8ad-4314-9296-0f6c4f707330-kube-api-access-rxflx\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.295348 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a029edf-d8ad-4314-9296-0f6c4f707330-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.296290 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-config" (OuterVolumeSpecName: "config") pod "a4b96638-d5c4-43d4-ab38-15972a55d0f4" (UID: "a4b96638-d5c4-43d4-ab38-15972a55d0f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.296861 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4b96638-d5c4-43d4-ab38-15972a55d0f4" (UID: "a4b96638-d5c4-43d4-ab38-15972a55d0f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.302629 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b96638-d5c4-43d4-ab38-15972a55d0f4-kube-api-access-x75gm" (OuterVolumeSpecName: "kube-api-access-x75gm") pod "a4b96638-d5c4-43d4-ab38-15972a55d0f4" (UID: "a4b96638-d5c4-43d4-ab38-15972a55d0f4"). InnerVolumeSpecName "kube-api-access-x75gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.302791 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b96638-d5c4-43d4-ab38-15972a55d0f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4b96638-d5c4-43d4-ab38-15972a55d0f4" (UID: "a4b96638-d5c4-43d4-ab38-15972a55d0f4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.347460 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-778fl"] Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.347899 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.347997 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.348061 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b96638-d5c4-43d4-ab38-15972a55d0f4" containerName="route-controller-manager" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.348137 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b96638-d5c4-43d4-ab38-15972a55d0f4" containerName="route-controller-manager" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.348201 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.348253 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.348311 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.348370 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.348423 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.348477 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.348538 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.348590 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.348647 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.348904 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.348960 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerName="marketplace-operator" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349011 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerName="marketplace-operator" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.349074 4730 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349129 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.349181 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349230 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.349282 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a029edf-d8ad-4314-9296-0f6c4f707330" containerName="controller-manager" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349333 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a029edf-d8ad-4314-9296-0f6c4f707330" containerName="controller-manager" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.349393 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349444 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="extract-utilities" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.349506 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349556 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.349613 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349666 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="extract-content" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.349722 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349777 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.349932 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d7fc22-9a5c-4569-821d-c915ab1f5657" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.350000 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f10560-f2e2-4a06-bc6b-fe37f7a4e0d5" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.350055 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b96638-d5c4-43d4-ab38-15972a55d0f4" containerName="route-controller-manager" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.350114 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a029edf-d8ad-4314-9296-0f6c4f707330" containerName="controller-manager" Jan 31 16:35:27 crc 
kubenswrapper[4730]: I0131 16:35:27.350169 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d0cf39-4835-4f5d-8c5a-9521331913ac" containerName="marketplace-operator" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.350246 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e875c6-16c4-43f2-8533-7d1af60844fb" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.350304 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ab894a-0ddc-46a2-8027-96606aae9396" containerName="registry-server" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.350665 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.368283 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-778fl"] Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.394902 4730 generic.go:334] "Generic (PLEG): container finished" podID="9a029edf-d8ad-4314-9296-0f6c4f707330" containerID="e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5" exitCode=0 Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.394986 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.394996 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" event={"ID":"9a029edf-d8ad-4314-9296-0f6c4f707330","Type":"ContainerDied","Data":"e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5"} Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.395038 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w2n4l" event={"ID":"9a029edf-d8ad-4314-9296-0f6c4f707330","Type":"ContainerDied","Data":"cc56148f748c708e85e58803d1853d289e77c3d11a7271a7683324ee79749c40"} Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.395057 4730 scope.go:117] "RemoveContainer" containerID="e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.396844 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-proxy-ca-bundles\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.396998 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-client-ca\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397191 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-config\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " 
pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397295 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397206 4730 generic.go:334] "Generic (PLEG): container finished" podID="a4b96638-d5c4-43d4-ab38-15972a55d0f4" containerID="0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1" exitCode=0 Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397302 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtqz\" (UniqueName: \"kubernetes.io/projected/02453824-4001-4e3c-8b5c-66c324efb475-kube-api-access-2rtqz\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397235 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" event={"ID":"a4b96638-d5c4-43d4-ab38-15972a55d0f4","Type":"ContainerDied","Data":"0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1"} Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397675 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls" event={"ID":"a4b96638-d5c4-43d4-ab38-15972a55d0f4","Type":"ContainerDied","Data":"6a252051842af1e2932c913da5622a6d4237c6bc6c7acc41f823d6016a3c4266"} Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397861 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02453824-4001-4e3c-8b5c-66c324efb475-serving-cert\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.397982 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.398094 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b96638-d5c4-43d4-ab38-15972a55d0f4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.398176 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b96638-d5c4-43d4-ab38-15972a55d0f4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.398261 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x75gm\" (UniqueName: \"kubernetes.io/projected/a4b96638-d5c4-43d4-ab38-15972a55d0f4-kube-api-access-x75gm\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.410135 4730 scope.go:117] "RemoveContainer" containerID="e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5" Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.410650 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5\": container with ID starting with e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5 not found: ID does not exist" containerID="e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.410744 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5"} err="failed to get container status \"e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5\": rpc error: code = NotFound desc = could not find container \"e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5\": container with ID starting with e963c7be3147efa1683c9ca9afb5e065f2a4456787cec61bf6d9792299f131e5 not found: ID does not exist" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.410986 4730 scope.go:117] "RemoveContainer" containerID="0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.433376 4730 scope.go:117] "RemoveContainer" containerID="0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.434700 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w2n4l"] Jan 31 16:35:27 crc kubenswrapper[4730]: E0131 16:35:27.439888 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1\": container with ID starting with 0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1 not found: ID does not exist" containerID="0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.439933 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1"} err="failed to get container status \"0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1\": rpc error: code = NotFound desc = could not find container \"0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1\": container with ID starting with 0a5caa75043f96e14a205a902ded5152664c71cac03b35f52a64ba295b6f0bd1 not found: ID does not exist" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.440507 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w2n4l"] Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.447303 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k"] Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.448063 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.450937 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.451091 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.451359 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.451458 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls"] Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.453976 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.454151 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.454258 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.459219 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k"] Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.467713 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ml2ls"] Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499556 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtqz\" (UniqueName: \"kubernetes.io/projected/02453824-4001-4e3c-8b5c-66c324efb475-kube-api-access-2rtqz\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499603 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02453824-4001-4e3c-8b5c-66c324efb475-serving-cert\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499632 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-proxy-ca-bundles\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499668 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz67v\" (UniqueName: \"kubernetes.io/projected/333d46d3-8117-42d9-adb1-84bcaf1bf083-kube-api-access-tz67v\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " 
pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499694 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-config\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499713 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-client-ca\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499735 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333d46d3-8117-42d9-adb1-84bcaf1bf083-serving-cert\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499751 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-client-ca\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.499779 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-config\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.500947 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-client-ca\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.501003 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-config\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.501087 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-proxy-ca-bundles\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.502744 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02453824-4001-4e3c-8b5c-66c324efb475-serving-cert\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.514263 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtqz\" (UniqueName: \"kubernetes.io/projected/02453824-4001-4e3c-8b5c-66c324efb475-kube-api-access-2rtqz\") pod \"controller-manager-6c8d7bdf95-778fl\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.600429 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz67v\" (UniqueName: \"kubernetes.io/projected/333d46d3-8117-42d9-adb1-84bcaf1bf083-kube-api-access-tz67v\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.600499 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-config\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.600532 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333d46d3-8117-42d9-adb1-84bcaf1bf083-serving-cert\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.600549 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-client-ca\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.601447 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-client-ca\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.601591 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-config\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.603908 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333d46d3-8117-42d9-adb1-84bcaf1bf083-serving-cert\") pod 
\"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.615229 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz67v\" (UniqueName: \"kubernetes.io/projected/333d46d3-8117-42d9-adb1-84bcaf1bf083-kube-api-access-tz67v\") pod \"route-controller-manager-59d67c4f7-67w5k\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.662091 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:27 crc kubenswrapper[4730]: I0131 16:35:27.766462 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.026910 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k"] Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.030002 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-778fl"] Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.116029 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-778fl"] Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.137935 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k"] Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.403833 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" event={"ID":"333d46d3-8117-42d9-adb1-84bcaf1bf083","Type":"ContainerStarted","Data":"b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1"} Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.403868 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" event={"ID":"333d46d3-8117-42d9-adb1-84bcaf1bf083","Type":"ContainerStarted","Data":"45b011fe3aab2ce54639cd57b5856151331fe5d3f1c63bfc715c0ff1b325bdfc"} Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.403960 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" podUID="333d46d3-8117-42d9-adb1-84bcaf1bf083" containerName="route-controller-manager" containerID="cri-o://b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1" gracePeriod=30 Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.404757 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.410292 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" event={"ID":"02453824-4001-4e3c-8b5c-66c324efb475","Type":"ContainerStarted","Data":"b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b"} Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 
16:35:28.410417 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" event={"ID":"02453824-4001-4e3c-8b5c-66c324efb475","Type":"ContainerStarted","Data":"46a1e0f1cc9b062dd903264c349fd7bfa8e4dda19557690b4c67097d0f719c9b"} Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.410494 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.410383 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" podUID="02453824-4001-4e3c-8b5c-66c324efb475" containerName="controller-manager" containerID="cri-o://b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b" gracePeriod=30 Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.417158 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.445890 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" podStartSLOduration=1.445875162 podStartE2EDuration="1.445875162s" podCreationTimestamp="2026-01-31 16:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:35:28.426852717 +0000 UTC m=+315.232909633" watchObservedRunningTime="2026-01-31 16:35:28.445875162 +0000 UTC m=+315.251932078" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.447053 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" podStartSLOduration=1.447049807 podStartE2EDuration="1.447049807s" podCreationTimestamp="2026-01-31 16:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:35:28.445742128 +0000 UTC m=+315.251799044" watchObservedRunningTime="2026-01-31 16:35:28.447049807 +0000 UTC m=+315.253106723" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.507564 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a029edf-d8ad-4314-9296-0f6c4f707330" path="/var/lib/kubelet/pods/9a029edf-d8ad-4314-9296-0f6c4f707330/volumes" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.508234 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b96638-d5c4-43d4-ab38-15972a55d0f4" path="/var/lib/kubelet/pods/a4b96638-d5c4-43d4-ab38-15972a55d0f4/volumes" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.613274 4730 patch_prober.go:28] interesting pod/route-controller-manager-59d67c4f7-67w5k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:43836->10.217.0.60:8443: read: connection reset by peer" start-of-body= Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.613360 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" podUID="333d46d3-8117-42d9-adb1-84bcaf1bf083" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:43836->10.217.0.60:8443: read: connection reset by peer" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.689669 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.714589 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rtqz\" (UniqueName: \"kubernetes.io/projected/02453824-4001-4e3c-8b5c-66c324efb475-kube-api-access-2rtqz\") pod \"02453824-4001-4e3c-8b5c-66c324efb475\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.714694 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02453824-4001-4e3c-8b5c-66c324efb475-serving-cert\") pod \"02453824-4001-4e3c-8b5c-66c324efb475\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.714719 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-proxy-ca-bundles\") pod \"02453824-4001-4e3c-8b5c-66c324efb475\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.714786 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-config\") pod \"02453824-4001-4e3c-8b5c-66c324efb475\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.714817 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-client-ca\") pod \"02453824-4001-4e3c-8b5c-66c324efb475\" (UID: \"02453824-4001-4e3c-8b5c-66c324efb475\") " Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.715706 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02453824-4001-4e3c-8b5c-66c324efb475" (UID: "02453824-4001-4e3c-8b5c-66c324efb475"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.715739 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-config" (OuterVolumeSpecName: "config") pod "02453824-4001-4e3c-8b5c-66c324efb475" (UID: "02453824-4001-4e3c-8b5c-66c324efb475"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.719466 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02453824-4001-4e3c-8b5c-66c324efb475-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02453824-4001-4e3c-8b5c-66c324efb475" (UID: "02453824-4001-4e3c-8b5c-66c324efb475"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.720238 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-client-ca" (OuterVolumeSpecName: "client-ca") pod "02453824-4001-4e3c-8b5c-66c324efb475" (UID: "02453824-4001-4e3c-8b5c-66c324efb475"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.722666 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02453824-4001-4e3c-8b5c-66c324efb475-kube-api-access-2rtqz" (OuterVolumeSpecName: "kube-api-access-2rtqz") pod "02453824-4001-4e3c-8b5c-66c324efb475" (UID: "02453824-4001-4e3c-8b5c-66c324efb475"). InnerVolumeSpecName "kube-api-access-2rtqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.816472 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.816684 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.816706 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rtqz\" (UniqueName: \"kubernetes.io/projected/02453824-4001-4e3c-8b5c-66c324efb475-kube-api-access-2rtqz\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.816731 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02453824-4001-4e3c-8b5c-66c324efb475-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.816742 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02453824-4001-4e3c-8b5c-66c324efb475-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:28 crc kubenswrapper[4730]: I0131 16:35:28.936263 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.059891 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.395175 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-59d67c4f7-67w5k_333d46d3-8117-42d9-adb1-84bcaf1bf083/route-controller-manager/0.log" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.395257 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.417442 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-59d67c4f7-67w5k_333d46d3-8117-42d9-adb1-84bcaf1bf083/route-controller-manager/0.log" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.417495 4730 generic.go:334] "Generic (PLEG): container finished" podID="333d46d3-8117-42d9-adb1-84bcaf1bf083" containerID="b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1" exitCode=255 Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.417555 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" event={"ID":"333d46d3-8117-42d9-adb1-84bcaf1bf083","Type":"ContainerDied","Data":"b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1"} Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.417596 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" event={"ID":"333d46d3-8117-42d9-adb1-84bcaf1bf083","Type":"ContainerDied","Data":"45b011fe3aab2ce54639cd57b5856151331fe5d3f1c63bfc715c0ff1b325bdfc"} Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.417619 4730 scope.go:117] "RemoveContainer" containerID="b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.417743 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.420161 4730 generic.go:334] "Generic (PLEG): container finished" podID="02453824-4001-4e3c-8b5c-66c324efb475" containerID="b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b" exitCode=0 Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.420190 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" event={"ID":"02453824-4001-4e3c-8b5c-66c324efb475","Type":"ContainerDied","Data":"b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b"} Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.420212 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" event={"ID":"02453824-4001-4e3c-8b5c-66c324efb475","Type":"ContainerDied","Data":"46a1e0f1cc9b062dd903264c349fd7bfa8e4dda19557690b4c67097d0f719c9b"} Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.420251 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-778fl" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.423235 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-client-ca\") pod \"333d46d3-8117-42d9-adb1-84bcaf1bf083\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.423275 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333d46d3-8117-42d9-adb1-84bcaf1bf083-serving-cert\") pod \"333d46d3-8117-42d9-adb1-84bcaf1bf083\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.423323 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz67v\" (UniqueName: \"kubernetes.io/projected/333d46d3-8117-42d9-adb1-84bcaf1bf083-kube-api-access-tz67v\") pod \"333d46d3-8117-42d9-adb1-84bcaf1bf083\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.423381 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-config\") pod \"333d46d3-8117-42d9-adb1-84bcaf1bf083\" (UID: \"333d46d3-8117-42d9-adb1-84bcaf1bf083\") " Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.424243 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-config" (OuterVolumeSpecName: "config") pod "333d46d3-8117-42d9-adb1-84bcaf1bf083" (UID: "333d46d3-8117-42d9-adb1-84bcaf1bf083"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.424736 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-client-ca" (OuterVolumeSpecName: "client-ca") pod "333d46d3-8117-42d9-adb1-84bcaf1bf083" (UID: "333d46d3-8117-42d9-adb1-84bcaf1bf083"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.436157 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333d46d3-8117-42d9-adb1-84bcaf1bf083-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "333d46d3-8117-42d9-adb1-84bcaf1bf083" (UID: "333d46d3-8117-42d9-adb1-84bcaf1bf083"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.436768 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333d46d3-8117-42d9-adb1-84bcaf1bf083-kube-api-access-tz67v" (OuterVolumeSpecName: "kube-api-access-tz67v") pod "333d46d3-8117-42d9-adb1-84bcaf1bf083" (UID: "333d46d3-8117-42d9-adb1-84bcaf1bf083"). InnerVolumeSpecName "kube-api-access-tz67v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.440690 4730 scope.go:117] "RemoveContainer" containerID="b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1" Jan 31 16:35:29 crc kubenswrapper[4730]: E0131 16:35:29.441182 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1\": container with ID starting with b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1 not found: ID does not exist" containerID="b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.441236 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1"} err="failed to get container status \"b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1\": rpc error: code = NotFound desc = could not find container \"b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1\": container with ID starting with b74174ac444a1e6a5cc5c59db6f1513736a4ed26340f1dda30f13ddca93593a1 not found: ID does not exist" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.441268 4730 scope.go:117] "RemoveContainer" containerID="b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.464707 4730 scope.go:117] "RemoveContainer" containerID="b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b" Jan 31 16:35:29 crc kubenswrapper[4730]: E0131 16:35:29.465182 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b\": container with ID starting with b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b not found: ID does not exist" containerID="b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.465344 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b"} err="failed to get container status \"b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b\": rpc error: code = NotFound desc = could not find container \"b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b\": container with ID starting with b385b4c91245df3e48640251b6c3f8594fba5f7798c1ea046267b0b13108fe7b not found: ID does not exist" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.479716 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-778fl"] Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.483579 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-778fl"] Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.524187 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.524299 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/333d46d3-8117-42d9-adb1-84bcaf1bf083-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.524340 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz67v\" (UniqueName: \"kubernetes.io/projected/333d46d3-8117-42d9-adb1-84bcaf1bf083-kube-api-access-tz67v\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.524356 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333d46d3-8117-42d9-adb1-84bcaf1bf083-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.756964 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k"] Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.758673 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-67w5k"] Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.796307 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-mttdq"] Jan 31 16:35:29 crc kubenswrapper[4730]: E0131 16:35:29.796821 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02453824-4001-4e3c-8b5c-66c324efb475" containerName="controller-manager" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.796890 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="02453824-4001-4e3c-8b5c-66c324efb475" containerName="controller-manager" Jan 31 16:35:29 crc kubenswrapper[4730]: E0131 16:35:29.796953 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333d46d3-8117-42d9-adb1-84bcaf1bf083" containerName="route-controller-manager" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.797022 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="333d46d3-8117-42d9-adb1-84bcaf1bf083" containerName="route-controller-manager" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.797159 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="333d46d3-8117-42d9-adb1-84bcaf1bf083" containerName="route-controller-manager" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.797218 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="02453824-4001-4e3c-8b5c-66c324efb475" containerName="controller-manager" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.797653 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.808196 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq"] Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.808715 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.810230 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.810456 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.810995 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.813885 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.814052 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.814842 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.815298 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.815481 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.821301 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq"] Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.822318 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.822851 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.822952 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.824628 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-mttdq"] Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827323 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzxjk\" (UniqueName: \"kubernetes.io/projected/94f6ce60-b51f-46c6-8e44-3c1196a834de-kube-api-access-vzxjk\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827383 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-client-ca\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827429 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5cfg\" (UniqueName: \"kubernetes.io/projected/f5175ea8-7d42-4d16-9217-a751672fde50-kube-api-access-s5cfg\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827505 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f6ce60-b51f-46c6-8e44-3c1196a834de-serving-cert\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827555 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-client-ca\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827591 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-proxy-ca-bundles\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827620 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-config\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827649 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5175ea8-7d42-4d16-9217-a751672fde50-serving-cert\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.827687 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-config\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.828201 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.835344 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.931653 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vzxjk\" (UniqueName: \"kubernetes.io/projected/94f6ce60-b51f-46c6-8e44-3c1196a834de-kube-api-access-vzxjk\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.931945 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-client-ca\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.931981 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5cfg\" (UniqueName: \"kubernetes.io/projected/f5175ea8-7d42-4d16-9217-a751672fde50-kube-api-access-s5cfg\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.932013 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f6ce60-b51f-46c6-8e44-3c1196a834de-serving-cert\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.932048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-client-ca\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.932067 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-proxy-ca-bundles\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.932086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-config\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.932100 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5175ea8-7d42-4d16-9217-a751672fde50-serving-cert\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.932119 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-config\") pod 
\"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.932845 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-client-ca\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.933198 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-proxy-ca-bundles\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.933878 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-config\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.933947 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-config\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.934017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-client-ca\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.937965 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5175ea8-7d42-4d16-9217-a751672fde50-serving-cert\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.954367 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f6ce60-b51f-46c6-8e44-3c1196a834de-serving-cert\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc kubenswrapper[4730]: I0131 16:35:29.955529 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzxjk\" (UniqueName: \"kubernetes.io/projected/94f6ce60-b51f-46c6-8e44-3c1196a834de-kube-api-access-vzxjk\") pod \"route-controller-manager-5764494d47-bqfzq\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:29 crc 
kubenswrapper[4730]: I0131 16:35:29.956464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5cfg\" (UniqueName: \"kubernetes.io/projected/f5175ea8-7d42-4d16-9217-a751672fde50-kube-api-access-s5cfg\") pod \"controller-manager-789bbbcf9f-mttdq\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:30 crc kubenswrapper[4730]: I0131 16:35:30.169478 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:30 crc kubenswrapper[4730]: I0131 16:35:30.183205 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:30 crc kubenswrapper[4730]: I0131 16:35:30.378575 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq"] Jan 31 16:35:30 crc kubenswrapper[4730]: W0131 16:35:30.390148 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f6ce60_b51f_46c6_8e44_3c1196a834de.slice/crio-666e1a604c524b58e2c3cbab200ef4192514fdbf888d932a1801a4a90d5f7939 WatchSource:0}: Error finding container 666e1a604c524b58e2c3cbab200ef4192514fdbf888d932a1801a4a90d5f7939: Status 404 returned error can't find the container with id 666e1a604c524b58e2c3cbab200ef4192514fdbf888d932a1801a4a90d5f7939 Jan 31 16:35:30 crc kubenswrapper[4730]: I0131 16:35:30.431129 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" event={"ID":"94f6ce60-b51f-46c6-8e44-3c1196a834de","Type":"ContainerStarted","Data":"666e1a604c524b58e2c3cbab200ef4192514fdbf888d932a1801a4a90d5f7939"} Jan 31 16:35:30 crc kubenswrapper[4730]: I0131 16:35:30.458842 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-mttdq"] Jan 31 16:35:30 crc kubenswrapper[4730]: I0131 16:35:30.475195 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02453824-4001-4e3c-8b5c-66c324efb475" path="/var/lib/kubelet/pods/02453824-4001-4e3c-8b5c-66c324efb475/volumes" Jan 31 16:35:30 crc kubenswrapper[4730]: I0131 16:35:30.475987 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="333d46d3-8117-42d9-adb1-84bcaf1bf083" path="/var/lib/kubelet/pods/333d46d3-8117-42d9-adb1-84bcaf1bf083/volumes" Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.451523 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" event={"ID":"94f6ce60-b51f-46c6-8e44-3c1196a834de","Type":"ContainerStarted","Data":"fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80"} Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.451857 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.454542 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" event={"ID":"f5175ea8-7d42-4d16-9217-a751672fde50","Type":"ContainerStarted","Data":"2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c"} Jan 31 16:35:31 crc 
kubenswrapper[4730]: I0131 16:35:31.454577 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" event={"ID":"f5175ea8-7d42-4d16-9217-a751672fde50","Type":"ContainerStarted","Data":"11fd82661c21cfbe8aae51bcd98111f825539bf34f49da09affa38e99c4ccdad"} Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.455105 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.461323 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.462396 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.492918 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" podStartSLOduration=3.492889212 podStartE2EDuration="3.492889212s" podCreationTimestamp="2026-01-31 16:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:35:31.475019801 +0000 UTC m=+318.281076717" watchObservedRunningTime="2026-01-31 16:35:31.492889212 +0000 UTC m=+318.298946128" Jan 31 16:35:31 crc kubenswrapper[4730]: I0131 16:35:31.537796 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" podStartSLOduration=3.537776816 podStartE2EDuration="3.537776816s" podCreationTimestamp="2026-01-31 16:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:35:31.495654834 +0000 UTC m=+318.301711770" watchObservedRunningTime="2026-01-31 16:35:31.537776816 +0000 UTC m=+318.343833732" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.765415 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wkn2d"] Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.767432 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.769688 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.779375 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wkn2d"] Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.842613 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qhr\" (UniqueName: \"kubernetes.io/projected/2d741dd8-c85c-4a72-af3f-684820db766f-kube-api-access-s7qhr\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.842697 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d741dd8-c85c-4a72-af3f-684820db766f-catalog-content\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.842760 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d741dd8-c85c-4a72-af3f-684820db766f-utilities\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.944121 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7qhr\" (UniqueName: \"kubernetes.io/projected/2d741dd8-c85c-4a72-af3f-684820db766f-kube-api-access-s7qhr\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.944215 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d741dd8-c85c-4a72-af3f-684820db766f-catalog-content\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.944249 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d741dd8-c85c-4a72-af3f-684820db766f-utilities\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.945074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d741dd8-c85c-4a72-af3f-684820db766f-catalog-content\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.945171 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d741dd8-c85c-4a72-af3f-684820db766f-utilities\") pod \"certified-operators-wkn2d\" (UID: 
\"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.976195 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrm4k"] Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.978369 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:38 crc kubenswrapper[4730]: I0131 16:35:38.981584 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.000885 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7qhr\" (UniqueName: \"kubernetes.io/projected/2d741dd8-c85c-4a72-af3f-684820db766f-kube-api-access-s7qhr\") pod \"certified-operators-wkn2d\" (UID: \"2d741dd8-c85c-4a72-af3f-684820db766f\") " pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.005325 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrm4k"] Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.045486 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-utilities\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.045845 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-catalog-content\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.045997 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9cc\" (UniqueName: \"kubernetes.io/projected/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-kube-api-access-9r9cc\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.082877 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.147469 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-utilities\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.148093 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-catalog-content\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.148233 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r9cc\" (UniqueName: \"kubernetes.io/projected/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-kube-api-access-9r9cc\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.148510 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-utilities\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.148579 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-catalog-content\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.175439 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r9cc\" (UniqueName: \"kubernetes.io/projected/5e150fad-06a0-4be0-a63d-5ca05ea1b1e5-kube-api-access-9r9cc\") pod \"community-operators-xrm4k\" (UID: \"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5\") " pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.327878 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.598761 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wkn2d"] Jan 31 16:35:39 crc kubenswrapper[4730]: I0131 16:35:39.715622 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrm4k"] Jan 31 16:35:40 crc kubenswrapper[4730]: I0131 16:35:40.501029 4730 generic.go:334] "Generic (PLEG): container finished" podID="5e150fad-06a0-4be0-a63d-5ca05ea1b1e5" containerID="4d1283da8b9d81ea3183a8e41916059ad7c2426b920a27a108304eb37f3fd8c8" exitCode=0 Jan 31 16:35:40 crc kubenswrapper[4730]: I0131 16:35:40.501247 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrm4k" event={"ID":"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5","Type":"ContainerDied","Data":"4d1283da8b9d81ea3183a8e41916059ad7c2426b920a27a108304eb37f3fd8c8"} Jan 31 16:35:40 crc kubenswrapper[4730]: I0131 16:35:40.501335 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrm4k" event={"ID":"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5","Type":"ContainerStarted","Data":"cfb440fc3521a2c9c359931f46f115a797969fda980cbf069ba713f0897ffd06"} Jan 31 16:35:40 crc kubenswrapper[4730]: I0131 16:35:40.504516 4730 generic.go:334] "Generic (PLEG): container finished" podID="2d741dd8-c85c-4a72-af3f-684820db766f" containerID="a023eae3bade853db1e26f572ace2ab7d025c3b4ac597a95d6514603b8153be9" exitCode=0 Jan 31 16:35:40 crc kubenswrapper[4730]: I0131 16:35:40.504554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkn2d" event={"ID":"2d741dd8-c85c-4a72-af3f-684820db766f","Type":"ContainerDied","Data":"a023eae3bade853db1e26f572ace2ab7d025c3b4ac597a95d6514603b8153be9"} Jan 31 16:35:40 crc kubenswrapper[4730]: I0131 16:35:40.504583 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkn2d" event={"ID":"2d741dd8-c85c-4a72-af3f-684820db766f","Type":"ContainerStarted","Data":"b2e1fcad8240b9682dda1a39d0f460b58b1d723539c4c91269cc78761b723acf"} Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.162732 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjwq"] Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.163958 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.166222 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.182619 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjwq"] Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.272083 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-utilities\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.272131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-catalog-content\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.272151 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvrr\" (UniqueName: \"kubernetes.io/projected/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-kube-api-access-9cvrr\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.357344 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-shd46"] Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.358200 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: W0131 16:35:41.362160 4730 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: secrets "redhat-operators-dockercfg-ct8rh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 31 16:35:41 crc kubenswrapper[4730]: E0131 16:35:41.362200 4730 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-operators-dockercfg-ct8rh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.376482 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14e024e-91a6-4a1d-be75-7b2588eea935-catalog-content\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.376535 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14e024e-91a6-4a1d-be75-7b2588eea935-utilities\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.376574 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-utilities\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.376608 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-catalog-content\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.376630 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cvrr\" (UniqueName: \"kubernetes.io/projected/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-kube-api-access-9cvrr\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.376653 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4mbq\" (UniqueName: \"kubernetes.io/projected/d14e024e-91a6-4a1d-be75-7b2588eea935-kube-api-access-n4mbq\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.377103 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-utilities\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.377310 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-catalog-content\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.390434 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shd46"] Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.412795 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cvrr\" (UniqueName: \"kubernetes.io/projected/5e7da571-bfe1-4d2b-b903-1ad7e91743fa-kube-api-access-9cvrr\") pod \"redhat-marketplace-mjjwq\" (UID: \"5e7da571-bfe1-4d2b-b903-1ad7e91743fa\") " pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.478500 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4mbq\" (UniqueName: \"kubernetes.io/projected/d14e024e-91a6-4a1d-be75-7b2588eea935-kube-api-access-n4mbq\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.478583 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14e024e-91a6-4a1d-be75-7b2588eea935-catalog-content\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.478607 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14e024e-91a6-4a1d-be75-7b2588eea935-utilities\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.479044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14e024e-91a6-4a1d-be75-7b2588eea935-utilities\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.479207 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14e024e-91a6-4a1d-be75-7b2588eea935-catalog-content\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.496500 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4mbq\" (UniqueName: \"kubernetes.io/projected/d14e024e-91a6-4a1d-be75-7b2588eea935-kube-api-access-n4mbq\") pod \"redhat-operators-shd46\" (UID: \"d14e024e-91a6-4a1d-be75-7b2588eea935\") " pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 
16:35:41.509235 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.509948 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkn2d" event={"ID":"2d741dd8-c85c-4a72-af3f-684820db766f","Type":"ContainerStarted","Data":"aaea0ff821bb749a7c271fe0579c27b80a4d66130e020b523e7906c5f5401516"} Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.517576 4730 generic.go:334] "Generic (PLEG): container finished" podID="5e150fad-06a0-4be0-a63d-5ca05ea1b1e5" containerID="49d2fa178c20c08dc3f9bc000bc67bde239022f175c84ce41d9f79d115330687" exitCode=0 Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.517632 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrm4k" event={"ID":"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5","Type":"ContainerDied","Data":"49d2fa178c20c08dc3f9bc000bc67bde239022f175c84ce41d9f79d115330687"} Jan 31 16:35:41 crc kubenswrapper[4730]: I0131 16:35:41.976908 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjwq"] Jan 31 16:35:41 crc kubenswrapper[4730]: W0131 16:35:41.991545 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e7da571_bfe1_4d2b_b903_1ad7e91743fa.slice/crio-e67248c50d6dd6e47c8fd3e2771d0a51b38fe1ae785214fc19bfcad69f36eb6f WatchSource:0}: Error finding container e67248c50d6dd6e47c8fd3e2771d0a51b38fe1ae785214fc19bfcad69f36eb6f: Status 404 returned error can't find the container with id e67248c50d6dd6e47c8fd3e2771d0a51b38fe1ae785214fc19bfcad69f36eb6f Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.355005 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.356092 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.533298 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrm4k" event={"ID":"5e150fad-06a0-4be0-a63d-5ca05ea1b1e5","Type":"ContainerStarted","Data":"0e763f060db130bb0baed28459714aa4182aa514d230e87308e58fff9dfe37a0"} Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.535504 4730 generic.go:334] "Generic (PLEG): container finished" podID="2d741dd8-c85c-4a72-af3f-684820db766f" containerID="aaea0ff821bb749a7c271fe0579c27b80a4d66130e020b523e7906c5f5401516" exitCode=0 Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.535581 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkn2d" event={"ID":"2d741dd8-c85c-4a72-af3f-684820db766f","Type":"ContainerDied","Data":"aaea0ff821bb749a7c271fe0579c27b80a4d66130e020b523e7906c5f5401516"} Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.537281 4730 generic.go:334] "Generic (PLEG): container finished" podID="5e7da571-bfe1-4d2b-b903-1ad7e91743fa" containerID="e56203a203c152c19803ada9fdf881b9e2069bf681fa85e59e899601f26bf2b8" exitCode=0 Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.537323 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjwq" event={"ID":"5e7da571-bfe1-4d2b-b903-1ad7e91743fa","Type":"ContainerDied","Data":"e56203a203c152c19803ada9fdf881b9e2069bf681fa85e59e899601f26bf2b8"} Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.537350 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjwq" event={"ID":"5e7da571-bfe1-4d2b-b903-1ad7e91743fa","Type":"ContainerStarted","Data":"e67248c50d6dd6e47c8fd3e2771d0a51b38fe1ae785214fc19bfcad69f36eb6f"} Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.554112 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrm4k" podStartSLOduration=3.139861827 podStartE2EDuration="4.554094572s" podCreationTimestamp="2026-01-31 16:35:38 +0000 UTC" firstStartedPulling="2026-01-31 16:35:40.502377914 +0000 UTC m=+327.308434830" lastFinishedPulling="2026-01-31 16:35:41.916610659 +0000 UTC m=+328.722667575" observedRunningTime="2026-01-31 16:35:42.551105482 +0000 UTC m=+329.357162398" watchObservedRunningTime="2026-01-31 16:35:42.554094572 +0000 UTC m=+329.360151488" Jan 31 16:35:42 crc kubenswrapper[4730]: I0131 16:35:42.756463 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shd46"] Jan 31 16:35:42 crc kubenswrapper[4730]: W0131 16:35:42.761823 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd14e024e_91a6_4a1d_be75_7b2588eea935.slice/crio-9ead117f70d4a10832ae9a432024991e9677814dd06032132adca8e389a53148 WatchSource:0}: Error finding container 9ead117f70d4a10832ae9a432024991e9677814dd06032132adca8e389a53148: Status 404 returned error can't find the container with id 9ead117f70d4a10832ae9a432024991e9677814dd06032132adca8e389a53148 Jan 31 16:35:43 crc kubenswrapper[4730]: I0131 16:35:43.554564 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wkn2d" event={"ID":"2d741dd8-c85c-4a72-af3f-684820db766f","Type":"ContainerStarted","Data":"1d7e3285a4771f75a3ed7c4e4dd4095390002fcae1164f13c49773ecf888f6b6"} Jan 31 
16:35:43 crc kubenswrapper[4730]: I0131 16:35:43.556679 4730 generic.go:334] "Generic (PLEG): container finished" podID="d14e024e-91a6-4a1d-be75-7b2588eea935" containerID="28ddd21bb071d3daf234fdfa603a5f1717fbd510f5bd6729acb6899562f4b853" exitCode=0 Jan 31 16:35:43 crc kubenswrapper[4730]: I0131 16:35:43.557842 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shd46" event={"ID":"d14e024e-91a6-4a1d-be75-7b2588eea935","Type":"ContainerDied","Data":"28ddd21bb071d3daf234fdfa603a5f1717fbd510f5bd6729acb6899562f4b853"} Jan 31 16:35:43 crc kubenswrapper[4730]: I0131 16:35:43.557870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shd46" event={"ID":"d14e024e-91a6-4a1d-be75-7b2588eea935","Type":"ContainerStarted","Data":"9ead117f70d4a10832ae9a432024991e9677814dd06032132adca8e389a53148"} Jan 31 16:35:43 crc kubenswrapper[4730]: I0131 16:35:43.578388 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wkn2d" podStartSLOduration=3.142362183 podStartE2EDuration="5.578360463s" podCreationTimestamp="2026-01-31 16:35:38 +0000 UTC" firstStartedPulling="2026-01-31 16:35:40.505623542 +0000 UTC m=+327.311680448" lastFinishedPulling="2026-01-31 16:35:42.941621802 +0000 UTC m=+329.747678728" observedRunningTime="2026-01-31 16:35:43.576014603 +0000 UTC m=+330.382071529" watchObservedRunningTime="2026-01-31 16:35:43.578360463 +0000 UTC m=+330.384417399" Jan 31 16:35:44 crc kubenswrapper[4730]: I0131 16:35:44.563966 4730 generic.go:334] "Generic (PLEG): container finished" podID="5e7da571-bfe1-4d2b-b903-1ad7e91743fa" containerID="aa60db4f0e29e8bb6453e8069c28805216e9b4fb90beceef8d2feaa46dea9c1d" exitCode=0 Jan 31 16:35:44 crc kubenswrapper[4730]: I0131 16:35:44.564055 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjwq" event={"ID":"5e7da571-bfe1-4d2b-b903-1ad7e91743fa","Type":"ContainerDied","Data":"aa60db4f0e29e8bb6453e8069c28805216e9b4fb90beceef8d2feaa46dea9c1d"} Jan 31 16:35:44 crc kubenswrapper[4730]: I0131 16:35:44.567732 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shd46" event={"ID":"d14e024e-91a6-4a1d-be75-7b2588eea935","Type":"ContainerStarted","Data":"1d00d145cad5c21727fc35bc0b8c46134836eb412e40dd2dc8906c5b1185d667"} Jan 31 16:35:45 crc kubenswrapper[4730]: I0131 16:35:45.574396 4730 generic.go:334] "Generic (PLEG): container finished" podID="d14e024e-91a6-4a1d-be75-7b2588eea935" containerID="1d00d145cad5c21727fc35bc0b8c46134836eb412e40dd2dc8906c5b1185d667" exitCode=0 Jan 31 16:35:45 crc kubenswrapper[4730]: I0131 16:35:45.574509 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shd46" event={"ID":"d14e024e-91a6-4a1d-be75-7b2588eea935","Type":"ContainerDied","Data":"1d00d145cad5c21727fc35bc0b8c46134836eb412e40dd2dc8906c5b1185d667"} Jan 31 16:35:45 crc kubenswrapper[4730]: I0131 16:35:45.580381 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjwq" event={"ID":"5e7da571-bfe1-4d2b-b903-1ad7e91743fa","Type":"ContainerStarted","Data":"8913f41a7865944f0f1019b1cb32a1a9d830e044902b0532080ff386c2400e5d"} Jan 31 16:35:45 crc kubenswrapper[4730]: I0131 16:35:45.625202 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mjjwq" podStartSLOduration=2.235813132 
podStartE2EDuration="4.625182993s" podCreationTimestamp="2026-01-31 16:35:41 +0000 UTC" firstStartedPulling="2026-01-31 16:35:42.54144734 +0000 UTC m=+329.347504266" lastFinishedPulling="2026-01-31 16:35:44.930817191 +0000 UTC m=+331.736874127" observedRunningTime="2026-01-31 16:35:45.620594395 +0000 UTC m=+332.426651341" watchObservedRunningTime="2026-01-31 16:35:45.625182993 +0000 UTC m=+332.431239919" Jan 31 16:35:46 crc kubenswrapper[4730]: I0131 16:35:46.588724 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shd46" event={"ID":"d14e024e-91a6-4a1d-be75-7b2588eea935","Type":"ContainerStarted","Data":"a21b595c99541062e619eac524c49c4d23fad8dbd9cb1d9020ed61d7a692119e"} Jan 31 16:35:46 crc kubenswrapper[4730]: I0131 16:35:46.619006 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-shd46" podStartSLOduration=3.206303678 podStartE2EDuration="5.618985534s" podCreationTimestamp="2026-01-31 16:35:41 +0000 UTC" firstStartedPulling="2026-01-31 16:35:43.558328728 +0000 UTC m=+330.364385654" lastFinishedPulling="2026-01-31 16:35:45.971010604 +0000 UTC m=+332.777067510" observedRunningTime="2026-01-31 16:35:46.612924161 +0000 UTC m=+333.418981087" watchObservedRunningTime="2026-01-31 16:35:46.618985534 +0000 UTC m=+333.425042460" Jan 31 16:35:46 crc kubenswrapper[4730]: I0131 16:35:46.799490 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-mttdq"] Jan 31 16:35:46 crc kubenswrapper[4730]: I0131 16:35:46.799694 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" podUID="f5175ea8-7d42-4d16-9217-a751672fde50" containerName="controller-manager" containerID="cri-o://2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c" gracePeriod=30 Jan 31 16:35:46 crc kubenswrapper[4730]: I0131 16:35:46.814547 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq"] Jan 31 16:35:46 crc kubenswrapper[4730]: I0131 16:35:46.814730 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" podUID="94f6ce60-b51f-46c6-8e44-3c1196a834de" containerName="route-controller-manager" containerID="cri-o://fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80" gracePeriod=30 Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.319112 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.350083 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzxjk\" (UniqueName: \"kubernetes.io/projected/94f6ce60-b51f-46c6-8e44-3c1196a834de-kube-api-access-vzxjk\") pod \"94f6ce60-b51f-46c6-8e44-3c1196a834de\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.350182 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f6ce60-b51f-46c6-8e44-3c1196a834de-serving-cert\") pod \"94f6ce60-b51f-46c6-8e44-3c1196a834de\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.350342 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-client-ca\") pod \"94f6ce60-b51f-46c6-8e44-3c1196a834de\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.350376 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-config\") pod \"94f6ce60-b51f-46c6-8e44-3c1196a834de\" (UID: \"94f6ce60-b51f-46c6-8e44-3c1196a834de\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.351175 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-client-ca" (OuterVolumeSpecName: "client-ca") pod "94f6ce60-b51f-46c6-8e44-3c1196a834de" (UID: "94f6ce60-b51f-46c6-8e44-3c1196a834de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.351506 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-config" (OuterVolumeSpecName: "config") pod "94f6ce60-b51f-46c6-8e44-3c1196a834de" (UID: "94f6ce60-b51f-46c6-8e44-3c1196a834de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.355961 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f6ce60-b51f-46c6-8e44-3c1196a834de-kube-api-access-vzxjk" (OuterVolumeSpecName: "kube-api-access-vzxjk") pod "94f6ce60-b51f-46c6-8e44-3c1196a834de" (UID: "94f6ce60-b51f-46c6-8e44-3c1196a834de"). InnerVolumeSpecName "kube-api-access-vzxjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.356168 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f6ce60-b51f-46c6-8e44-3c1196a834de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "94f6ce60-b51f-46c6-8e44-3c1196a834de" (UID: "94f6ce60-b51f-46c6-8e44-3c1196a834de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.402755 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452139 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5175ea8-7d42-4d16-9217-a751672fde50-serving-cert\") pod \"f5175ea8-7d42-4d16-9217-a751672fde50\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452230 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-config\") pod \"f5175ea8-7d42-4d16-9217-a751672fde50\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452287 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-proxy-ca-bundles\") pod \"f5175ea8-7d42-4d16-9217-a751672fde50\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452304 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-client-ca\") pod \"f5175ea8-7d42-4d16-9217-a751672fde50\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452387 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5cfg\" (UniqueName: \"kubernetes.io/projected/f5175ea8-7d42-4d16-9217-a751672fde50-kube-api-access-s5cfg\") pod \"f5175ea8-7d42-4d16-9217-a751672fde50\" (UID: \"f5175ea8-7d42-4d16-9217-a751672fde50\") " Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452663 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzxjk\" (UniqueName: \"kubernetes.io/projected/94f6ce60-b51f-46c6-8e44-3c1196a834de-kube-api-access-vzxjk\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452680 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f6ce60-b51f-46c6-8e44-3c1196a834de-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452690 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.452699 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f6ce60-b51f-46c6-8e44-3c1196a834de-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.453219 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f5175ea8-7d42-4d16-9217-a751672fde50" (UID: "f5175ea8-7d42-4d16-9217-a751672fde50"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.453243 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-client-ca" (OuterVolumeSpecName: "client-ca") pod "f5175ea8-7d42-4d16-9217-a751672fde50" (UID: "f5175ea8-7d42-4d16-9217-a751672fde50"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.453283 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-config" (OuterVolumeSpecName: "config") pod "f5175ea8-7d42-4d16-9217-a751672fde50" (UID: "f5175ea8-7d42-4d16-9217-a751672fde50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.457944 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5175ea8-7d42-4d16-9217-a751672fde50-kube-api-access-s5cfg" (OuterVolumeSpecName: "kube-api-access-s5cfg") pod "f5175ea8-7d42-4d16-9217-a751672fde50" (UID: "f5175ea8-7d42-4d16-9217-a751672fde50"). InnerVolumeSpecName "kube-api-access-s5cfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.458024 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5175ea8-7d42-4d16-9217-a751672fde50-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f5175ea8-7d42-4d16-9217-a751672fde50" (UID: "f5175ea8-7d42-4d16-9217-a751672fde50"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.553898 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5175ea8-7d42-4d16-9217-a751672fde50-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.554932 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.554946 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.554962 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5175ea8-7d42-4d16-9217-a751672fde50-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.554976 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5cfg\" (UniqueName: \"kubernetes.io/projected/f5175ea8-7d42-4d16-9217-a751672fde50-kube-api-access-s5cfg\") on node \"crc\" DevicePath \"\"" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.594657 4730 generic.go:334] "Generic (PLEG): container finished" podID="94f6ce60-b51f-46c6-8e44-3c1196a834de" containerID="fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80" exitCode=0 Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.594757 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" event={"ID":"94f6ce60-b51f-46c6-8e44-3c1196a834de","Type":"ContainerDied","Data":"fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80"} Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.594849 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" event={"ID":"94f6ce60-b51f-46c6-8e44-3c1196a834de","Type":"ContainerDied","Data":"666e1a604c524b58e2c3cbab200ef4192514fdbf888d932a1801a4a90d5f7939"} Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.594791 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.594890 4730 scope.go:117] "RemoveContainer" containerID="fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.596386 4730 generic.go:334] "Generic (PLEG): container finished" podID="f5175ea8-7d42-4d16-9217-a751672fde50" containerID="2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c" exitCode=0 Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.596510 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" event={"ID":"f5175ea8-7d42-4d16-9217-a751672fde50","Type":"ContainerDied","Data":"2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c"} Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.596557 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" event={"ID":"f5175ea8-7d42-4d16-9217-a751672fde50","Type":"ContainerDied","Data":"11fd82661c21cfbe8aae51bcd98111f825539bf34f49da09affa38e99c4ccdad"} Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.596506 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-mttdq" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.609147 4730 scope.go:117] "RemoveContainer" containerID="fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80" Jan 31 16:35:47 crc kubenswrapper[4730]: E0131 16:35:47.609862 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80\": container with ID starting with fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80 not found: ID does not exist" containerID="fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.609903 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80"} err="failed to get container status \"fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80\": rpc error: code = NotFound desc = could not find container \"fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80\": container with ID starting with fb9fd2077dd064bb441fca0b1cab9f9e460b7e794725cb5776ea9a06a819ff80 not found: ID does not exist" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.609929 4730 scope.go:117] "RemoveContainer" containerID="2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.627460 4730 scope.go:117] "RemoveContainer" containerID="2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c" Jan 31 16:35:47 crc kubenswrapper[4730]: E0131 16:35:47.627916 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c\": container with ID starting with 2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c not found: ID does not exist" containerID="2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.627948 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c"} err="failed to get container status \"2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c\": rpc error: code = NotFound desc = could not find container \"2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c\": container with ID starting with 2e247134c0fc632530150e3f94a59188330d3967b18db0d4701a273c7dd3966c not found: ID does not exist" Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.631476 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-mttdq"] Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.635951 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-mttdq"] Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.652249 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq"] Jan 31 16:35:47 crc kubenswrapper[4730]: I0131 16:35:47.671856 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-bqfzq"] Jan 31 16:35:48 crc 
kubenswrapper[4730]: I0131 16:35:48.474403 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f6ce60-b51f-46c6-8e44-3c1196a834de" path="/var/lib/kubelet/pods/94f6ce60-b51f-46c6-8e44-3c1196a834de/volumes" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.475662 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5175ea8-7d42-4d16-9217-a751672fde50" path="/var/lib/kubelet/pods/f5175ea8-7d42-4d16-9217-a751672fde50/volumes" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.807405 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz"] Jan 31 16:35:48 crc kubenswrapper[4730]: E0131 16:35:48.807758 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f6ce60-b51f-46c6-8e44-3c1196a834de" containerName="route-controller-manager" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.807782 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f6ce60-b51f-46c6-8e44-3c1196a834de" containerName="route-controller-manager" Jan 31 16:35:48 crc kubenswrapper[4730]: E0131 16:35:48.807798 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5175ea8-7d42-4d16-9217-a751672fde50" containerName="controller-manager" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.807844 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5175ea8-7d42-4d16-9217-a751672fde50" containerName="controller-manager" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.808014 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f6ce60-b51f-46c6-8e44-3c1196a834de" containerName="route-controller-manager" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.808042 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5175ea8-7d42-4d16-9217-a751672fde50" containerName="controller-manager" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.808614 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.812207 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.812871 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.812972 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.813013 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.813118 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.815849 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.820368 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.828329 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq"] Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.829251 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.833914 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.834771 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.836129 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.836138 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.836392 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.836457 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.840396 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz"] Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.865451 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq"] Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884189 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-config\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884232 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-client-ca\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884279 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nffc\" (UniqueName: \"kubernetes.io/projected/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-kube-api-access-7nffc\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884318 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-config\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884337 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68th\" (UniqueName: \"kubernetes.io/projected/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-kube-api-access-l68th\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884361 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-serving-cert\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884406 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-client-ca\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884441 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-proxy-ca-bundles\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.884459 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-serving-cert\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.985842 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-client-ca\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.985901 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-proxy-ca-bundles\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.985926 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-serving-cert\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.985968 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-config\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.985999 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-client-ca\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.986023 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nffc\" (UniqueName: \"kubernetes.io/projected/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-kube-api-access-7nffc\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.986063 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-config\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.986085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68th\" (UniqueName: \"kubernetes.io/projected/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-kube-api-access-l68th\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: 
\"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.986110 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-serving-cert\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.987517 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-config\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.987547 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-config\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.987728 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-client-ca\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.987847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-proxy-ca-bundles\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.988089 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-client-ca\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.990436 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-serving-cert\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:48 crc kubenswrapper[4730]: I0131 16:35:48.996541 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-serving-cert\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.012860 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-l68th\" (UniqueName: \"kubernetes.io/projected/193cc7fb-c4cf-4e28-bfdf-c845ad8af99a-kube-api-access-l68th\") pod \"route-controller-manager-59d67c4f7-xmjqq\" (UID: \"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a\") " pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.016436 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nffc\" (UniqueName: \"kubernetes.io/projected/17cf691a-2c81-48eb-9fe6-d9971fa1bc55-kube-api-access-7nffc\") pod \"controller-manager-6c8d7bdf95-k4zlz\" (UID: \"17cf691a-2c81-48eb-9fe6-d9971fa1bc55\") " pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.083565 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.083612 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.126470 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.132467 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.147307 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.329566 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.329604 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.385724 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.547853 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz"] Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.605504 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq"] Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.615681 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" event={"ID":"17cf691a-2c81-48eb-9fe6-d9971fa1bc55","Type":"ContainerStarted","Data":"b3e1fe51b4b917c731e82db89a71ad85631e5cabcf858e4a5a2a02297825079b"} Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.675624 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wkn2d" Jan 31 16:35:49 crc kubenswrapper[4730]: I0131 16:35:49.684387 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrm4k" Jan 31 16:35:50 crc kubenswrapper[4730]: I0131 16:35:50.623852 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" event={"ID":"17cf691a-2c81-48eb-9fe6-d9971fa1bc55","Type":"ContainerStarted","Data":"759ebd5d0c6a686d93089128f38fc271289df8d3ddcaa519fd1eecfdb9925489"} Jan 31 16:35:50 crc kubenswrapper[4730]: I0131 16:35:50.624283 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:50 crc kubenswrapper[4730]: I0131 16:35:50.625524 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" event={"ID":"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a","Type":"ContainerStarted","Data":"79f37c792f5b62625b3dafee48d72a28488ab681870356f956c2bf6f8c55bbae"} Jan 31 16:35:50 crc kubenswrapper[4730]: I0131 16:35:50.625663 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" event={"ID":"193cc7fb-c4cf-4e28-bfdf-c845ad8af99a","Type":"ContainerStarted","Data":"dce21f6a981fe6867c672df2470e972f365334e2d42afac2b077f54722d27e05"} Jan 31 16:35:50 crc kubenswrapper[4730]: I0131 16:35:50.629821 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" Jan 31 16:35:50 crc kubenswrapper[4730]: I0131 16:35:50.672433 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" podStartSLOduration=4.6724158 podStartE2EDuration="4.6724158s" podCreationTimestamp="2026-01-31 16:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:35:50.667670357 +0000 UTC m=+337.473727273" watchObservedRunningTime="2026-01-31 16:35:50.6724158 +0000 UTC m=+337.478472716" Jan 31 16:35:50 crc kubenswrapper[4730]: I0131 16:35:50.675229 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c8d7bdf95-k4zlz" podStartSLOduration=4.675218075 podStartE2EDuration="4.675218075s" podCreationTimestamp="2026-01-31 16:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:35:50.64990973 +0000 UTC m=+337.455966666" watchObservedRunningTime="2026-01-31 16:35:50.675218075 +0000 UTC m=+337.481275011" Jan 31 16:35:51 crc kubenswrapper[4730]: I0131 16:35:51.509747 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:51 crc kubenswrapper[4730]: I0131 16:35:51.510725 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:51 crc kubenswrapper[4730]: I0131 16:35:51.557830 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:51 crc kubenswrapper[4730]: I0131 16:35:51.630448 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:51 crc kubenswrapper[4730]: I0131 16:35:51.635885 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59d67c4f7-xmjqq" Jan 31 16:35:51 crc 
kubenswrapper[4730]: I0131 16:35:51.694372 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mjjwq" Jan 31 16:35:52 crc kubenswrapper[4730]: I0131 16:35:52.357056 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:52 crc kubenswrapper[4730]: I0131 16:35:52.357132 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:35:53 crc kubenswrapper[4730]: I0131 16:35:53.403879 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shd46" podUID="d14e024e-91a6-4a1d-be75-7b2588eea935" containerName="registry-server" probeResult="failure" output=< Jan 31 16:35:53 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:35:53 crc kubenswrapper[4730]: > Jan 31 16:36:02 crc kubenswrapper[4730]: I0131 16:36:02.417200 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:36:02 crc kubenswrapper[4730]: I0131 16:36:02.487044 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-shd46" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.013920 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-587t4"] Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.015841 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.046847 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-587t4"] Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105058 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad0521a-780c-4483-8f36-e288ce7898b3-trusted-ca\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105143 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ad0521a-780c-4483-8f36-e288ce7898b3-registry-certificates\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105176 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-registry-tls\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105203 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rggg\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-kube-api-access-6rggg\") pod \"image-registry-66df7c8f76-587t4\" (UID: 
\"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105259 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ad0521a-780c-4483-8f36-e288ce7898b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105292 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ad0521a-780c-4483-8f36-e288ce7898b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105321 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-bound-sa-token\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.105581 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.134905 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.207101 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad0521a-780c-4483-8f36-e288ce7898b3-trusted-ca\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.207358 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ad0521a-780c-4483-8f36-e288ce7898b3-registry-certificates\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.207478 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-registry-tls\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 
31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.207566 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rggg\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-kube-api-access-6rggg\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.207709 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ad0521a-780c-4483-8f36-e288ce7898b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.207782 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ad0521a-780c-4483-8f36-e288ce7898b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.207857 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-bound-sa-token\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.208257 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad0521a-780c-4483-8f36-e288ce7898b3-trusted-ca\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.208386 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ad0521a-780c-4483-8f36-e288ce7898b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.210209 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ad0521a-780c-4483-8f36-e288ce7898b3-registry-certificates\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.214069 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-registry-tls\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.215292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8ad0521a-780c-4483-8f36-e288ce7898b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.227431 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rggg\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-kube-api-access-6rggg\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.230195 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ad0521a-780c-4483-8f36-e288ce7898b3-bound-sa-token\") pod \"image-registry-66df7c8f76-587t4\" (UID: \"8ad0521a-780c-4483-8f36-e288ce7898b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.334139 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.736024 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-587t4"] Jan 31 16:36:20 crc kubenswrapper[4730]: I0131 16:36:20.799584 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-587t4" event={"ID":"8ad0521a-780c-4483-8f36-e288ce7898b3","Type":"ContainerStarted","Data":"c7dfe2a4db0b1ba0b5768d25950d4513ee93e315b13b00d3249cda6703cb2d9a"} Jan 31 16:36:21 crc kubenswrapper[4730]: I0131 16:36:21.805899 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-587t4" event={"ID":"8ad0521a-780c-4483-8f36-e288ce7898b3","Type":"ContainerStarted","Data":"994cba95f4762bd515951b991a37801512354cd28f60f4e58fe2c738d97bf9cc"} Jan 31 16:36:21 crc kubenswrapper[4730]: I0131 16:36:21.806230 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:21 crc kubenswrapper[4730]: I0131 16:36:21.823270 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-587t4" podStartSLOduration=2.82325537 podStartE2EDuration="2.82325537s" podCreationTimestamp="2026-01-31 16:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:36:21.822697393 +0000 UTC m=+368.628754309" watchObservedRunningTime="2026-01-31 16:36:21.82325537 +0000 UTC m=+368.629312286" Jan 31 16:36:26 crc kubenswrapper[4730]: I0131 16:36:26.975561 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:36:26 crc kubenswrapper[4730]: I0131 16:36:26.975963 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:36:40 crc kubenswrapper[4730]: I0131 16:36:40.340606 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-587t4" Jan 31 16:36:40 crc kubenswrapper[4730]: I0131 16:36:40.403913 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z6ftx"] Jan 31 16:36:56 crc kubenswrapper[4730]: I0131 16:36:56.975856 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:36:56 crc kubenswrapper[4730]: I0131 16:36:56.976430 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:37:05 crc kubenswrapper[4730]: I0131 16:37:05.491416 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" podUID="0d504518-949c-45ca-8fc7-2f7e1d00f611" containerName="registry" containerID="cri-o://ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8" gracePeriod=30 Jan 31 16:37:05 crc kubenswrapper[4730]: I0131 16:37:05.964909 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.001962 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-certificates\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.002389 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-tls\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.002709 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.002952 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-trusted-ca\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.003120 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d504518-949c-45ca-8fc7-2f7e1d00f611-ca-trust-extracted\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: 
\"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.003421 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c8w4\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-kube-api-access-7c8w4\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.003595 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-bound-sa-token\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.004310 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.004442 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d504518-949c-45ca-8fc7-2f7e1d00f611-installation-pull-secrets\") pod \"0d504518-949c-45ca-8fc7-2f7e1d00f611\" (UID: \"0d504518-949c-45ca-8fc7-2f7e1d00f611\") " Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.004998 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.005268 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.005291 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.013163 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d504518-949c-45ca-8fc7-2f7e1d00f611-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.014005 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.015424 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.019296 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-kube-api-access-7c8w4" (OuterVolumeSpecName: "kube-api-access-7c8w4") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "kube-api-access-7c8w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.027877 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.031309 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d504518-949c-45ca-8fc7-2f7e1d00f611-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0d504518-949c-45ca-8fc7-2f7e1d00f611" (UID: "0d504518-949c-45ca-8fc7-2f7e1d00f611"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.101901 4730 generic.go:334] "Generic (PLEG): container finished" podID="0d504518-949c-45ca-8fc7-2f7e1d00f611" containerID="ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8" exitCode=0 Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.101975 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" event={"ID":"0d504518-949c-45ca-8fc7-2f7e1d00f611","Type":"ContainerDied","Data":"ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8"} Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.102033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" event={"ID":"0d504518-949c-45ca-8fc7-2f7e1d00f611","Type":"ContainerDied","Data":"e264a3e695444659a52ff79fe750e481e82e20561fecac66a4a38e4fda504e80"} Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.102048 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z6ftx" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.102064 4730 scope.go:117] "RemoveContainer" containerID="ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.107697 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c8w4\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-kube-api-access-7c8w4\") on node \"crc\" DevicePath \"\"" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.107747 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.107769 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d504518-949c-45ca-8fc7-2f7e1d00f611-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.108227 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d504518-949c-45ca-8fc7-2f7e1d00f611-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.108245 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d504518-949c-45ca-8fc7-2f7e1d00f611-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.127951 4730 scope.go:117] "RemoveContainer" containerID="ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8" Jan 31 16:37:06 crc kubenswrapper[4730]: E0131 16:37:06.131160 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8\": container with ID starting with ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8 not found: ID does not exist" containerID="ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.131331 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8"} err="failed to get container status \"ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8\": rpc error: code = NotFound desc = could not find container \"ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8\": container with ID starting with ea8f96ce435b034a98bfab043ce851d62f0b576553c746375c2c343ea7269cd8 not found: ID does not exist" Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.181857 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z6ftx"] Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.187972 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z6ftx"] Jan 31 16:37:06 crc kubenswrapper[4730]: I0131 16:37:06.476093 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d504518-949c-45ca-8fc7-2f7e1d00f611" path="/var/lib/kubelet/pods/0d504518-949c-45ca-8fc7-2f7e1d00f611/volumes" Jan 31 16:37:26 crc kubenswrapper[4730]: I0131 
16:37:26.975591 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:37:26 crc kubenswrapper[4730]: I0131 16:37:26.976266 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:37:26 crc kubenswrapper[4730]: I0131 16:37:26.976330 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:37:26 crc kubenswrapper[4730]: I0131 16:37:26.977149 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b11c9a3a6b003984d5cc7b0769b316d6026aca4dc2bc56230ee6ace4c824f75"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:37:26 crc kubenswrapper[4730]: I0131 16:37:26.977255 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://9b11c9a3a6b003984d5cc7b0769b316d6026aca4dc2bc56230ee6ace4c824f75" gracePeriod=600 Jan 31 16:37:27 crc kubenswrapper[4730]: I0131 16:37:27.814011 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="9b11c9a3a6b003984d5cc7b0769b316d6026aca4dc2bc56230ee6ace4c824f75" exitCode=0 Jan 31 16:37:27 crc kubenswrapper[4730]: I0131 16:37:27.814088 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"9b11c9a3a6b003984d5cc7b0769b316d6026aca4dc2bc56230ee6ace4c824f75"} Jan 31 16:37:27 crc kubenswrapper[4730]: I0131 16:37:27.814358 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"81c316c56ff641f78d1454bdb69055b2cc577488dee85bfffb222944d2c0456f"} Jan 31 16:37:27 crc kubenswrapper[4730]: I0131 16:37:27.814394 4730 scope.go:117] "RemoveContainer" containerID="50099d6d895b4365a0e6c0efb2255d81e6515356966ccc4e010d95323162b30c" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.965221 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp"] Jan 31 16:39:29 crc kubenswrapper[4730]: E0131 16:39:29.965838 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d504518-949c-45ca-8fc7-2f7e1d00f611" containerName="registry" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.965849 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d504518-949c-45ca-8fc7-2f7e1d00f611" containerName="registry" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.965942 4730 memory_manager.go:354] "RemoveStaleState removing state" 
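
The sequence above is the kubelet's standard liveness-probe recovery: the HTTP GET to 127.0.0.1:8798/health is refused, the probe is reported as failed, and the container is killed with gracePeriod=600 (consistent with a pod-level terminationGracePeriodSeconds of 600) before being restarted under a new container ID. A minimal sketch, using k8s.io/api types, of a container spec that would drive this behaviour; the period, timeout, threshold, and image values are assumptions, since the log only shows the failure itself.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        grace := int64(600) // matches the gracePeriod=600 used when the container is killed
        pod := corev1.Pod{
            Spec: corev1.PodSpec{
                TerminationGracePeriodSeconds: &grace,
                Containers: []corev1.Container{{
                    Name:  "machine-config-daemon",
                    Image: "example.invalid/machine-config-daemon", // placeholder image
                    LivenessProbe: &corev1.Probe{
                        ProbeHandler: corev1.ProbeHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Host: "127.0.0.1",
                                Path: "/health",
                                Port: intstr.FromInt(8798),
                            },
                        },
                        // Illustrative values; not visible in the log.
                        InitialDelaySeconds: 10,
                        PeriodSeconds:       10,
                        TimeoutSeconds:      5,
                        FailureThreshold:    3,
                    },
                }},
            },
        }
        p := pod.Spec.Containers[0].LivenessProbe
        fmt.Printf("probe http://%s:%d%s\n", p.HTTPGet.Host, p.HTTPGet.Port.IntValue(), p.HTTPGet.Path)
    }
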
podUID="0d504518-949c-45ca-8fc7-2f7e1d00f611" containerName="registry" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.966269 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.969227 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.970378 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.980886 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kpnf9" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.983108 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-h45ph"] Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.983835 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h45ph" Jan 31 16:39:29 crc kubenswrapper[4730]: I0131 16:39:29.986499 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zqfx8" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.009023 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fx65b"] Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.009849 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.011884 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vdpzp" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.030398 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h45ph"] Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.033287 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp"] Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.036069 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fx65b"] Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.081307 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g59r\" (UniqueName: \"kubernetes.io/projected/fd8a2a6c-ec68-4905-a135-ee167753b731-kube-api-access-6g59r\") pod \"cert-manager-cainjector-cf98fcc89-9lhsp\" (UID: \"fd8a2a6c-ec68-4905-a135-ee167753b731\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.081548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds7g6\" (UniqueName: \"kubernetes.io/projected/09f7f15d-b5e1-45b1-9f93-9bbd68805051-kube-api-access-ds7g6\") pod \"cert-manager-858654f9db-h45ph\" (UID: \"09f7f15d-b5e1-45b1-9f93-9bbd68805051\") " pod="cert-manager/cert-manager-858654f9db-h45ph" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.182432 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds7g6\" (UniqueName: \"kubernetes.io/projected/09f7f15d-b5e1-45b1-9f93-9bbd68805051-kube-api-access-ds7g6\") pod 
\"cert-manager-858654f9db-h45ph\" (UID: \"09f7f15d-b5e1-45b1-9f93-9bbd68805051\") " pod="cert-manager/cert-manager-858654f9db-h45ph" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.182581 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sql9b\" (UniqueName: \"kubernetes.io/projected/4208ba55-ea8a-4d6d-9618-8afcbf1216a2-kube-api-access-sql9b\") pod \"cert-manager-webhook-687f57d79b-fx65b\" (UID: \"4208ba55-ea8a-4d6d-9618-8afcbf1216a2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.182630 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g59r\" (UniqueName: \"kubernetes.io/projected/fd8a2a6c-ec68-4905-a135-ee167753b731-kube-api-access-6g59r\") pod \"cert-manager-cainjector-cf98fcc89-9lhsp\" (UID: \"fd8a2a6c-ec68-4905-a135-ee167753b731\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.199723 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g59r\" (UniqueName: \"kubernetes.io/projected/fd8a2a6c-ec68-4905-a135-ee167753b731-kube-api-access-6g59r\") pod \"cert-manager-cainjector-cf98fcc89-9lhsp\" (UID: \"fd8a2a6c-ec68-4905-a135-ee167753b731\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.215488 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds7g6\" (UniqueName: \"kubernetes.io/projected/09f7f15d-b5e1-45b1-9f93-9bbd68805051-kube-api-access-ds7g6\") pod \"cert-manager-858654f9db-h45ph\" (UID: \"09f7f15d-b5e1-45b1-9f93-9bbd68805051\") " pod="cert-manager/cert-manager-858654f9db-h45ph" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.278544 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.283308 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sql9b\" (UniqueName: \"kubernetes.io/projected/4208ba55-ea8a-4d6d-9618-8afcbf1216a2-kube-api-access-sql9b\") pod \"cert-manager-webhook-687f57d79b-fx65b\" (UID: \"4208ba55-ea8a-4d6d-9618-8afcbf1216a2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.295579 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h45ph" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.312553 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sql9b\" (UniqueName: \"kubernetes.io/projected/4208ba55-ea8a-4d6d-9618-8afcbf1216a2-kube-api-access-sql9b\") pod \"cert-manager-webhook-687f57d79b-fx65b\" (UID: \"4208ba55-ea8a-4d6d-9618-8afcbf1216a2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.321044 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.735760 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp"] Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.741053 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.779358 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fx65b"] Jan 31 16:39:30 crc kubenswrapper[4730]: I0131 16:39:30.779641 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h45ph"] Jan 31 16:39:31 crc kubenswrapper[4730]: I0131 16:39:31.641727 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h45ph" event={"ID":"09f7f15d-b5e1-45b1-9f93-9bbd68805051","Type":"ContainerStarted","Data":"13ab5333898e7099b164aa189175cc2a2c7c0c3bb53a2301edfe36c3cfe2d218"} Jan 31 16:39:31 crc kubenswrapper[4730]: I0131 16:39:31.643031 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" event={"ID":"fd8a2a6c-ec68-4905-a135-ee167753b731","Type":"ContainerStarted","Data":"40bbbafb95f0dedd52d169cb232c28921254cc0bf4cf41a9cbf98aceb4f0f241"} Jan 31 16:39:31 crc kubenswrapper[4730]: I0131 16:39:31.643835 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" event={"ID":"4208ba55-ea8a-4d6d-9618-8afcbf1216a2","Type":"ContainerStarted","Data":"2ccf4f701a895d2d99117754a771ffcf3d971c037ff2be8b1c628a419272e8e1"} Jan 31 16:39:35 crc kubenswrapper[4730]: I0131 16:39:35.669558 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h45ph" event={"ID":"09f7f15d-b5e1-45b1-9f93-9bbd68805051","Type":"ContainerStarted","Data":"bd1f51b82ae9bfc8dcd4a9101f093aaad6ea60570a73043437562713a4775723"} Jan 31 16:39:35 crc kubenswrapper[4730]: I0131 16:39:35.674235 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" event={"ID":"fd8a2a6c-ec68-4905-a135-ee167753b731","Type":"ContainerStarted","Data":"fbaa4ee1733e071fb42def20d763406c53cfb19b67c30fe6a1be885badb2b61d"} Jan 31 16:39:35 crc kubenswrapper[4730]: I0131 16:39:35.677377 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" event={"ID":"4208ba55-ea8a-4d6d-9618-8afcbf1216a2","Type":"ContainerStarted","Data":"8ae7143f6797504d8fe4ff3cf443725beda874160dfa5cd352861d862377288b"} Jan 31 16:39:35 crc kubenswrapper[4730]: I0131 16:39:35.677825 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" Jan 31 16:39:35 crc kubenswrapper[4730]: I0131 16:39:35.693410 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-h45ph" podStartSLOduration=2.948796982 podStartE2EDuration="6.69338665s" podCreationTimestamp="2026-01-31 16:39:29 +0000 UTC" firstStartedPulling="2026-01-31 16:39:30.791363508 +0000 UTC m=+557.597420434" lastFinishedPulling="2026-01-31 16:39:34.535953186 +0000 UTC m=+561.342010102" observedRunningTime="2026-01-31 16:39:35.691590819 +0000 UTC m=+562.497647765" watchObservedRunningTime="2026-01-31 16:39:35.69338665 +0000 UTC m=+562.499443576" Jan 31 16:39:35 crc 
kubenswrapper[4730]: I0131 16:39:35.725559 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9lhsp" podStartSLOduration=2.87531536 podStartE2EDuration="6.725536821s" podCreationTimestamp="2026-01-31 16:39:29 +0000 UTC" firstStartedPulling="2026-01-31 16:39:30.740752174 +0000 UTC m=+557.546809100" lastFinishedPulling="2026-01-31 16:39:34.590973635 +0000 UTC m=+561.397030561" observedRunningTime="2026-01-31 16:39:35.723574355 +0000 UTC m=+562.529631281" watchObservedRunningTime="2026-01-31 16:39:35.725536821 +0000 UTC m=+562.531593747" Jan 31 16:39:35 crc kubenswrapper[4730]: I0131 16:39:35.746713 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" podStartSLOduration=2.995842016 podStartE2EDuration="6.746696871s" podCreationTimestamp="2026-01-31 16:39:29 +0000 UTC" firstStartedPulling="2026-01-31 16:39:30.782317912 +0000 UTC m=+557.588374828" lastFinishedPulling="2026-01-31 16:39:34.533172767 +0000 UTC m=+561.339229683" observedRunningTime="2026-01-31 16:39:35.746218177 +0000 UTC m=+562.552275103" watchObservedRunningTime="2026-01-31 16:39:35.746696871 +0000 UTC m=+562.552753797" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.501073 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-25nsf"] Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.502052 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-controller" containerID="cri-o://b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.502110 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="sbdb" containerID="cri-o://465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.502221 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="nbdb" containerID="cri-o://393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.502304 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="northd" containerID="cri-o://828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.502398 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.502441 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-acl-logging" 
containerID="cri-o://77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.502495 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-node" containerID="cri-o://e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.569472 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" containerID="cri-o://e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c" gracePeriod=30 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.701782 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/2.log" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.702309 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/1.log" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.702371 4730 generic.go:334] "Generic (PLEG): container finished" podID="2d1c5cbc-307d-4556-b162-2c5c0103662d" containerID="45cc2c43568992c508493fd3172eb9663d13fb70f0aeb76f87274df206079158" exitCode=2 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.702451 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerDied","Data":"45cc2c43568992c508493fd3172eb9663d13fb70f0aeb76f87274df206079158"} Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.702496 4730 scope.go:117] "RemoveContainer" containerID="628a414aa58b365a660f8745dbacd5fa0ecb2f761e87cb4f6bf2c1b57cfef0f0" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.703044 4730 scope.go:117] "RemoveContainer" containerID="45cc2c43568992c508493fd3172eb9663d13fb70f0aeb76f87274df206079158" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.703388 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c8lpn_openshift-multus(2d1c5cbc-307d-4556-b162-2c5c0103662d)\"" pod="openshift-multus/multus-c8lpn" podUID="2d1c5cbc-307d-4556-b162-2c5c0103662d" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.707357 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovnkube-controller/3.log" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.709381 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovn-acl-logging/0.log" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.709966 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovn-controller/0.log" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710368 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c" exitCode=0 Jan 31 
16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710389 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1" exitCode=0 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710396 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655" exitCode=0 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710405 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458" exitCode=143 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710413 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35" exitCode=143 Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710432 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c"} Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710457 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1"} Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710468 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655"} Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710476 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458"} Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.710484 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35"} Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.744358 4730 scope.go:117] "RemoveContainer" containerID="8bed56a61c245201de08d98693ab45f357b79e8b4be94158dc30d49dbb581731" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.855899 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovn-acl-logging/0.log" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.856439 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovn-controller/0.log" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.861870 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.910519 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-72gr6"] Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.910925 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.910996 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911066 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="nbdb" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911131 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="nbdb" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911187 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911238 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911283 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911332 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911395 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911451 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911499 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911545 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911597 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-node" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911645 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-node" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911707 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kubecfg-setup" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911763 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kubecfg-setup" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911840 4730 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.911903 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.911985 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-acl-logging" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912045 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-acl-logging" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.912098 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="northd" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912150 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="northd" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.912196 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="sbdb" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912242 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="sbdb" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912384 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912436 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912498 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912575 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="nbdb" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912642 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912705 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912770 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-acl-logging" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912847 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="sbdb" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912912 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovn-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.912968 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="kube-rbac-proxy-node" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 
16:39:39.913017 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="northd" Jan 31 16:39:39 crc kubenswrapper[4730]: E0131 16:39:39.913162 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.913221 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.913382 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerName="ovnkube-controller" Jan 31 16:39:39 crc kubenswrapper[4730]: I0131 16:39:39.915191 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.025844 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-ovn\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.025899 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-script-lib\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.025927 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-bin\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.025951 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-etc-openvswitch\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.025984 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-netns\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026016 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-ovn-kubernetes\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026017 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026061 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-systemd-units\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026105 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026160 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-env-overrides\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026209 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlj7c\" (UniqueName: \"kubernetes.io/projected/8e53a6e0-ca28-4088-8ced-22ba134f316e-kube-api-access-mlj7c\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026257 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-kubelet\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026292 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-node-log\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026328 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-log-socket\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026358 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-systemd\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026391 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-netd\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026419 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-openvswitch\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026464 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-config\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026473 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026497 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-var-lib-openvswitch\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026507 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026535 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-slash\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026550 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026572 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026569 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026610 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026618 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovn-node-metrics-cert\") pod \"8e53a6e0-ca28-4088-8ced-22ba134f316e\" (UID: \"8e53a6e0-ca28-4088-8ced-22ba134f316e\") " Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026874 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovnkube-script-lib\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026936 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-cni-bin\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026941 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.026994 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-cni-netd\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027003 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-slash" (OuterVolumeSpecName: "host-slash") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027029 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027051 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-var-lib-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027090 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-slash\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027128 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-log-socket\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-systemd\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-env-overrides\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027235 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-ovn\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027276 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-etc-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027304 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027342 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-run-netns\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027058 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027309 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027332 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027336 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027360 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-node-log" (OuterVolumeSpecName: "node-log") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027383 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-log-socket" (OuterVolumeSpecName: "log-socket") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027562 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-node-log\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027748 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-kubelet\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027793 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027861 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovn-node-metrics-cert\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027905 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovnkube-config\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027938 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dbqk\" (UniqueName: \"kubernetes.io/projected/f6296bde-5ccc-422b-9839-a8098e38f7cd-kube-api-access-9dbqk\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027959 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-systemd-units\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.027999 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028102 4730 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028122 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028135 4730 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028147 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028159 4730 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028171 4730 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028184 4730 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028197 4730 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028211 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028222 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028233 4730 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028244 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028346 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028382 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/8e53a6e0-ca28-4088-8ced-22ba134f316e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028405 4730 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028430 4730 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.028452 4730 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.032523 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.032653 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e53a6e0-ca28-4088-8ced-22ba134f316e-kube-api-access-mlj7c" (OuterVolumeSpecName: "kube-api-access-mlj7c") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "kube-api-access-mlj7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.039379 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8e53a6e0-ca28-4088-8ced-22ba134f316e" (UID: "8e53a6e0-ca28-4088-8ced-22ba134f316e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129155 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-slash\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129200 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-log-socket\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129244 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-systemd\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129280 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-env-overrides\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129295 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-slash\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129310 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-ovn\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129362 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-ovn\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129376 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-etc-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-etc-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129418 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129443 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-run-netns\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129467 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-log-socket\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129486 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-node-log\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129502 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-kubelet\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129538 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129569 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovn-node-metrics-cert\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129595 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovnkube-config\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129623 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-systemd-units\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129659 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dbqk\" (UniqueName: \"kubernetes.io/projected/f6296bde-5ccc-422b-9839-a8098e38f7cd-kube-api-access-9dbqk\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129684 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129713 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovnkube-script-lib\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129731 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-cni-bin\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-cni-netd\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129830 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-var-lib-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129872 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlj7c\" (UniqueName: \"kubernetes.io/projected/8e53a6e0-ca28-4088-8ced-22ba134f316e-kube-api-access-mlj7c\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129888 4730 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e53a6e0-ca28-4088-8ced-22ba134f316e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129901 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e53a6e0-ca28-4088-8ced-22ba134f316e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129935 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-var-lib-openvswitch\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129968 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-run-netns\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.129442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-run-systemd\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130027 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-node-log\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130122 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-systemd-units\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130176 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130185 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-env-overrides\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130239 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-cni-netd\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-cni-bin\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130271 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-kubelet\") pod \"ovnkube-node-72gr6\" (UID: 
\"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130326 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f6296bde-5ccc-422b-9839-a8098e38f7cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.130986 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovnkube-config\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.131221 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovnkube-script-lib\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.133307 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6296bde-5ccc-422b-9839-a8098e38f7cd-ovn-node-metrics-cert\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.153295 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dbqk\" (UniqueName: \"kubernetes.io/projected/f6296bde-5ccc-422b-9839-a8098e38f7cd-kube-api-access-9dbqk\") pod \"ovnkube-node-72gr6\" (UID: \"f6296bde-5ccc-422b-9839-a8098e38f7cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.227680 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.325071 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-fx65b" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.716482 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/2.log" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.718523 4730 generic.go:334] "Generic (PLEG): container finished" podID="f6296bde-5ccc-422b-9839-a8098e38f7cd" containerID="3e1179f7e868d3acf6f614041b0c5478b95f69f724d7a95bd561ef10e0e824d8" exitCode=0 Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.718552 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerDied","Data":"3e1179f7e868d3acf6f614041b0c5478b95f69f724d7a95bd561ef10e0e824d8"} Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.718735 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"3d5e4cb51becd5757f964d9717b755361b48b31ee6181d6768946be18b621e6b"} Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.729319 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovn-acl-logging/0.log" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.729716 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-25nsf_8e53a6e0-ca28-4088-8ced-22ba134f316e/ovn-controller/0.log" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730044 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875" exitCode=0 Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730071 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad" exitCode=0 Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730080 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e53a6e0-ca28-4088-8ced-22ba134f316e" containerID="828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833" exitCode=0 Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730085 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875"} Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad"} Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730144 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833"} 
Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730154 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" event={"ID":"8e53a6e0-ca28-4088-8ced-22ba134f316e","Type":"ContainerDied","Data":"ede295a0e698071c578b7e237e2fb7363ca4e7760498d6e8ea8b7e35a3b563c7"} Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730174 4730 scope.go:117] "RemoveContainer" containerID="e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.730173 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-25nsf" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.751884 4730 scope.go:117] "RemoveContainer" containerID="465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.768352 4730 scope.go:117] "RemoveContainer" containerID="393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.774191 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-25nsf"] Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.783736 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-25nsf"] Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.792603 4730 scope.go:117] "RemoveContainer" containerID="828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.814226 4730 scope.go:117] "RemoveContainer" containerID="c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.831600 4730 scope.go:117] "RemoveContainer" containerID="e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.843902 4730 scope.go:117] "RemoveContainer" containerID="77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.860552 4730 scope.go:117] "RemoveContainer" containerID="b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.882526 4730 scope.go:117] "RemoveContainer" containerID="399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.911877 4730 scope.go:117] "RemoveContainer" containerID="e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.914284 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c\": container with ID starting with e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c not found: ID does not exist" containerID="e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.914325 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c"} err="failed to get container status \"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c\": rpc error: code = NotFound desc = could not find container 
\"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c\": container with ID starting with e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.914357 4730 scope.go:117] "RemoveContainer" containerID="465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.923251 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\": container with ID starting with 465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875 not found: ID does not exist" containerID="465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.923485 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875"} err="failed to get container status \"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\": rpc error: code = NotFound desc = could not find container \"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\": container with ID starting with 465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.923514 4730 scope.go:117] "RemoveContainer" containerID="393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.923754 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\": container with ID starting with 393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad not found: ID does not exist" containerID="393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.923781 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad"} err="failed to get container status \"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\": rpc error: code = NotFound desc = could not find container \"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\": container with ID starting with 393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.923814 4730 scope.go:117] "RemoveContainer" containerID="828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.924104 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\": container with ID starting with 828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833 not found: ID does not exist" containerID="828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.924133 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833"} 
err="failed to get container status \"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\": rpc error: code = NotFound desc = could not find container \"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\": container with ID starting with 828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.924160 4730 scope.go:117] "RemoveContainer" containerID="c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.924366 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\": container with ID starting with c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1 not found: ID does not exist" containerID="c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.924392 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1"} err="failed to get container status \"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\": rpc error: code = NotFound desc = could not find container \"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\": container with ID starting with c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.924408 4730 scope.go:117] "RemoveContainer" containerID="e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.924811 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\": container with ID starting with e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655 not found: ID does not exist" containerID="e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.924838 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655"} err="failed to get container status \"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\": rpc error: code = NotFound desc = could not find container \"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\": container with ID starting with e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.924856 4730 scope.go:117] "RemoveContainer" containerID="77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.925087 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\": container with ID starting with 77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458 not found: ID does not exist" containerID="77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925114 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458"} err="failed to get container status \"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\": rpc error: code = NotFound desc = could not find container \"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\": container with ID starting with 77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925130 4730 scope.go:117] "RemoveContainer" containerID="b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.925324 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\": container with ID starting with b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35 not found: ID does not exist" containerID="b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925355 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35"} err="failed to get container status \"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\": rpc error: code = NotFound desc = could not find container \"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\": container with ID starting with b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925372 4730 scope.go:117] "RemoveContainer" containerID="399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81" Jan 31 16:39:40 crc kubenswrapper[4730]: E0131 16:39:40.925574 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\": container with ID starting with 399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81 not found: ID does not exist" containerID="399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925603 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81"} err="failed to get container status \"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\": rpc error: code = NotFound desc = could not find container \"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\": container with ID starting with 399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925619 4730 scope.go:117] "RemoveContainer" containerID="e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925850 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c"} err="failed to get container status \"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c\": rpc error: code = NotFound desc = could 
not find container \"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c\": container with ID starting with e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.925874 4730 scope.go:117] "RemoveContainer" containerID="465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926036 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875"} err="failed to get container status \"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\": rpc error: code = NotFound desc = could not find container \"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\": container with ID starting with 465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926062 4730 scope.go:117] "RemoveContainer" containerID="393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926249 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad"} err="failed to get container status \"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\": rpc error: code = NotFound desc = could not find container \"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\": container with ID starting with 393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926274 4730 scope.go:117] "RemoveContainer" containerID="828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926456 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833"} err="failed to get container status \"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\": rpc error: code = NotFound desc = could not find container \"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\": container with ID starting with 828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926481 4730 scope.go:117] "RemoveContainer" containerID="c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926654 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1"} err="failed to get container status \"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\": rpc error: code = NotFound desc = could not find container \"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\": container with ID starting with c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926680 4730 scope.go:117] "RemoveContainer" containerID="e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.926992 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655"} err="failed to get container status \"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\": rpc error: code = NotFound desc = could not find container \"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\": container with ID starting with e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927019 4730 scope.go:117] "RemoveContainer" containerID="77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927185 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458"} err="failed to get container status \"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\": rpc error: code = NotFound desc = could not find container \"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\": container with ID starting with 77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927210 4730 scope.go:117] "RemoveContainer" containerID="b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927375 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35"} err="failed to get container status \"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\": rpc error: code = NotFound desc = could not find container \"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\": container with ID starting with b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927399 4730 scope.go:117] "RemoveContainer" containerID="399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927608 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81"} err="failed to get container status \"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\": rpc error: code = NotFound desc = could not find container \"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\": container with ID starting with 399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927633 4730 scope.go:117] "RemoveContainer" containerID="e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927837 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c"} err="failed to get container status \"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c\": rpc error: code = NotFound desc = could not find container \"e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c\": container with ID starting with 
e5bebe8c43ea6519e7166d1e125e6134af1dce8646bc08bae81ecb37a88deb3c not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.927860 4730 scope.go:117] "RemoveContainer" containerID="465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928070 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875"} err="failed to get container status \"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\": rpc error: code = NotFound desc = could not find container \"465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875\": container with ID starting with 465427948df827d3ff2f4b5c3903209153c7dd405328ac84509886dd6c3c0875 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928094 4730 scope.go:117] "RemoveContainer" containerID="393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928264 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad"} err="failed to get container status \"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\": rpc error: code = NotFound desc = could not find container \"393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad\": container with ID starting with 393d9b4981421a66bf8f136bedb4b8c130db1d144b8497a23095b4b902cebaad not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928287 4730 scope.go:117] "RemoveContainer" containerID="828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928434 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833"} err="failed to get container status \"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\": rpc error: code = NotFound desc = could not find container \"828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833\": container with ID starting with 828498e7702f77aa2c8aa754d6ac4f3c402ea36cc3fdca6fce73bbd6b667d833 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928455 4730 scope.go:117] "RemoveContainer" containerID="c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928631 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1"} err="failed to get container status \"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\": rpc error: code = NotFound desc = could not find container \"c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1\": container with ID starting with c5052efc826b2d1f6cfd50b0045e121e6b9526c14fc6ded9d9b83243049805b1 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928653 4730 scope.go:117] "RemoveContainer" containerID="e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928814 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655"} err="failed to get container status \"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\": rpc error: code = NotFound desc = could not find container \"e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655\": container with ID starting with e86667ef197a6aa147b127d7d6c2eb7267727afe5c4e15b60169ebcf1e91c655 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928835 4730 scope.go:117] "RemoveContainer" containerID="77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.928995 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458"} err="failed to get container status \"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\": rpc error: code = NotFound desc = could not find container \"77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458\": container with ID starting with 77f5067bed2ca35c429ca95fabcea3c8a9eac93674559c4080975e189d9f8458 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.929052 4730 scope.go:117] "RemoveContainer" containerID="b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.929241 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35"} err="failed to get container status \"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\": rpc error: code = NotFound desc = could not find container \"b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35\": container with ID starting with b0167aaf7687674ca319cf997cc026049da3506aeb7ef4bab46587598d6f0c35 not found: ID does not exist" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.929263 4730 scope.go:117] "RemoveContainer" containerID="399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81" Jan 31 16:39:40 crc kubenswrapper[4730]: I0131 16:39:40.929445 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81"} err="failed to get container status \"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\": rpc error: code = NotFound desc = could not find container \"399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81\": container with ID starting with 399af45e80fe28bd26f419c0bf95d78825af89f8465a3598e726f2d8f26bff81 not found: ID does not exist" Jan 31 16:39:41 crc kubenswrapper[4730]: I0131 16:39:41.739547 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"9b826b2a0d5dd3dd27286b611fddbe3d9c3369dfe24a1b8e8633c3b18c54a66e"} Jan 31 16:39:41 crc kubenswrapper[4730]: I0131 16:39:41.739857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"b370c2a6403dc00bbea846a7b01bd3c30586935f234bc186e7fec6a13e97efb1"} Jan 31 16:39:41 crc kubenswrapper[4730]: I0131 16:39:41.739870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"8f9c1496d5680383a0f8c259d8e11d092550f71828024138e335ec7914d37fe6"} Jan 31 16:39:41 crc kubenswrapper[4730]: I0131 16:39:41.739881 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"b350ee9693e87aa87b45ed6b1cd1d0ebc9665cc68b460357420a7e7bfe988e5e"} Jan 31 16:39:41 crc kubenswrapper[4730]: I0131 16:39:41.739893 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"27745e875d6bf23cefb30c6db771c4ad6b29f2fbab0493fd90c32e5db80f62cb"} Jan 31 16:39:41 crc kubenswrapper[4730]: I0131 16:39:41.739904 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"824bcce0bc43b1bf8de20e6b6b49bdf30e8e9aed7e69082fd7e679a8e3eeb00b"} Jan 31 16:39:42 crc kubenswrapper[4730]: I0131 16:39:42.476300 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e53a6e0-ca28-4088-8ced-22ba134f316e" path="/var/lib/kubelet/pods/8e53a6e0-ca28-4088-8ced-22ba134f316e/volumes" Jan 31 16:39:43 crc kubenswrapper[4730]: I0131 16:39:43.751231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"368791997ea75aa3ed1804bd0ab1ef3d3eaaf42d1cbe0a7f0225879731d70bf9"} Jan 31 16:39:45 crc kubenswrapper[4730]: I0131 16:39:45.765585 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" event={"ID":"f6296bde-5ccc-422b-9839-a8098e38f7cd","Type":"ContainerStarted","Data":"7fb804c98ee0be7fe52e45210b705c26e883577faf96584160642f6dc709d563"} Jan 31 16:39:45 crc kubenswrapper[4730]: I0131 16:39:45.766228 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:45 crc kubenswrapper[4730]: I0131 16:39:45.766318 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:45 crc kubenswrapper[4730]: I0131 16:39:45.790522 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" podStartSLOduration=6.790507695 podStartE2EDuration="6.790507695s" podCreationTimestamp="2026-01-31 16:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:39:45.786596304 +0000 UTC m=+572.592653210" watchObservedRunningTime="2026-01-31 16:39:45.790507695 +0000 UTC m=+572.596564611" Jan 31 16:39:45 crc kubenswrapper[4730]: I0131 16:39:45.795692 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:46 crc kubenswrapper[4730]: I0131 16:39:46.771059 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:39:46 crc kubenswrapper[4730]: I0131 16:39:46.799731 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 
16:39:50 crc kubenswrapper[4730]: I0131 16:39:50.464736 4730 scope.go:117] "RemoveContainer" containerID="45cc2c43568992c508493fd3172eb9663d13fb70f0aeb76f87274df206079158" Jan 31 16:39:50 crc kubenswrapper[4730]: E0131 16:39:50.465372 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c8lpn_openshift-multus(2d1c5cbc-307d-4556-b162-2c5c0103662d)\"" pod="openshift-multus/multus-c8lpn" podUID="2d1c5cbc-307d-4556-b162-2c5c0103662d" Jan 31 16:39:56 crc kubenswrapper[4730]: I0131 16:39:56.975454 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:39:56 crc kubenswrapper[4730]: I0131 16:39:56.975894 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:40:05 crc kubenswrapper[4730]: I0131 16:40:05.465991 4730 scope.go:117] "RemoveContainer" containerID="45cc2c43568992c508493fd3172eb9663d13fb70f0aeb76f87274df206079158" Jan 31 16:40:05 crc kubenswrapper[4730]: I0131 16:40:05.893895 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c8lpn_2d1c5cbc-307d-4556-b162-2c5c0103662d/kube-multus/2.log" Jan 31 16:40:05 crc kubenswrapper[4730]: I0131 16:40:05.893979 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c8lpn" event={"ID":"2d1c5cbc-307d-4556-b162-2c5c0103662d","Type":"ContainerStarted","Data":"3144c9726a9d6d62a957bbc2619d77f8b53a7986baa643117c1cb9792d4ad925"} Jan 31 16:40:10 crc kubenswrapper[4730]: I0131 16:40:10.259733 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-72gr6" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.690158 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt"] Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.691826 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.694213 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.704247 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt"] Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.782408 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtwj\" (UniqueName: \"kubernetes.io/projected/92c0884c-e6df-47ef-9f9b-5b185db8ea98-kube-api-access-9mtwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.782453 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.782491 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.883607 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtwj\" (UniqueName: \"kubernetes.io/projected/92c0884c-e6df-47ef-9f9b-5b185db8ea98-kube-api-access-9mtwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.883674 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.883729 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.884493 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.884597 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:18 crc kubenswrapper[4730]: I0131 16:40:18.906876 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtwj\" (UniqueName: \"kubernetes.io/projected/92c0884c-e6df-47ef-9f9b-5b185db8ea98-kube-api-access-9mtwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:19 crc kubenswrapper[4730]: I0131 16:40:19.005243 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:19 crc kubenswrapper[4730]: I0131 16:40:19.254396 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt"] Jan 31 16:40:19 crc kubenswrapper[4730]: I0131 16:40:19.983908 4730 generic.go:334] "Generic (PLEG): container finished" podID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerID="2569984762ce4d7da641cb0a1edd8054883a542063339bb3ce8015b034a1d8d8" exitCode=0 Jan 31 16:40:19 crc kubenswrapper[4730]: I0131 16:40:19.983982 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" event={"ID":"92c0884c-e6df-47ef-9f9b-5b185db8ea98","Type":"ContainerDied","Data":"2569984762ce4d7da641cb0a1edd8054883a542063339bb3ce8015b034a1d8d8"} Jan 31 16:40:19 crc kubenswrapper[4730]: I0131 16:40:19.984039 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" event={"ID":"92c0884c-e6df-47ef-9f9b-5b185db8ea98","Type":"ContainerStarted","Data":"eeec0c343eb4d10ff45c9c4aac2452605437a50c8fc7b0087b1230f4396b7461"} Jan 31 16:40:22 crc kubenswrapper[4730]: I0131 16:40:22.000417 4730 generic.go:334] "Generic (PLEG): container finished" podID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerID="bf9a290272310a44c0c0a280d715fa333436be6a9cb57ce0544f89f2c427be7a" exitCode=0 Jan 31 16:40:22 crc kubenswrapper[4730]: I0131 16:40:22.000487 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" event={"ID":"92c0884c-e6df-47ef-9f9b-5b185db8ea98","Type":"ContainerDied","Data":"bf9a290272310a44c0c0a280d715fa333436be6a9cb57ce0544f89f2c427be7a"} Jan 31 16:40:23 crc kubenswrapper[4730]: I0131 16:40:23.012589 4730 generic.go:334] "Generic (PLEG): container finished" podID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerID="c5cd5ecf3997b4aa31f823cd4933e2654e5d4f600460ff76e8a166af8a595be8" exitCode=0 Jan 31 16:40:23 crc kubenswrapper[4730]: I0131 
16:40:23.012651 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" event={"ID":"92c0884c-e6df-47ef-9f9b-5b185db8ea98","Type":"ContainerDied","Data":"c5cd5ecf3997b4aa31f823cd4933e2654e5d4f600460ff76e8a166af8a595be8"} Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.312772 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.461183 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-util\") pod \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.461405 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mtwj\" (UniqueName: \"kubernetes.io/projected/92c0884c-e6df-47ef-9f9b-5b185db8ea98-kube-api-access-9mtwj\") pod \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.461483 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-bundle\") pod \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\" (UID: \"92c0884c-e6df-47ef-9f9b-5b185db8ea98\") " Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.463257 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-bundle" (OuterVolumeSpecName: "bundle") pod "92c0884c-e6df-47ef-9f9b-5b185db8ea98" (UID: "92c0884c-e6df-47ef-9f9b-5b185db8ea98"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.470129 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c0884c-e6df-47ef-9f9b-5b185db8ea98-kube-api-access-9mtwj" (OuterVolumeSpecName: "kube-api-access-9mtwj") pod "92c0884c-e6df-47ef-9f9b-5b185db8ea98" (UID: "92c0884c-e6df-47ef-9f9b-5b185db8ea98"). InnerVolumeSpecName "kube-api-access-9mtwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.497158 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-util" (OuterVolumeSpecName: "util") pod "92c0884c-e6df-47ef-9f9b-5b185db8ea98" (UID: "92c0884c-e6df-47ef-9f9b-5b185db8ea98"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.563487 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mtwj\" (UniqueName: \"kubernetes.io/projected/92c0884c-e6df-47ef-9f9b-5b185db8ea98-kube-api-access-9mtwj\") on node \"crc\" DevicePath \"\"" Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.563537 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:40:24 crc kubenswrapper[4730]: I0131 16:40:24.563555 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92c0884c-e6df-47ef-9f9b-5b185db8ea98-util\") on node \"crc\" DevicePath \"\"" Jan 31 16:40:25 crc kubenswrapper[4730]: I0131 16:40:25.028220 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" event={"ID":"92c0884c-e6df-47ef-9f9b-5b185db8ea98","Type":"ContainerDied","Data":"eeec0c343eb4d10ff45c9c4aac2452605437a50c8fc7b0087b1230f4396b7461"} Jan 31 16:40:25 crc kubenswrapper[4730]: I0131 16:40:25.028278 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeec0c343eb4d10ff45c9c4aac2452605437a50c8fc7b0087b1230f4396b7461" Jan 31 16:40:25 crc kubenswrapper[4730]: I0131 16:40:25.028364 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.228521 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-4zz5r"] Jan 31 16:40:26 crc kubenswrapper[4730]: E0131 16:40:26.228701 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerName="pull" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.228712 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerName="pull" Jan 31 16:40:26 crc kubenswrapper[4730]: E0131 16:40:26.228728 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerName="extract" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.228733 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerName="extract" Jan 31 16:40:26 crc kubenswrapper[4730]: E0131 16:40:26.228741 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerName="util" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.228747 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerName="util" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.228861 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c0884c-e6df-47ef-9f9b-5b185db8ea98" containerName="extract" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.229194 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.231278 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.231537 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tsx5s" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.233506 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.243846 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-4zz5r"] Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.390614 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbh29\" (UniqueName: \"kubernetes.io/projected/7545d2e0-52ef-41a7-a0be-3c97df2f4fd8-kube-api-access-pbh29\") pod \"nmstate-operator-646758c888-4zz5r\" (UID: \"7545d2e0-52ef-41a7-a0be-3c97df2f4fd8\") " pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.492336 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbh29\" (UniqueName: \"kubernetes.io/projected/7545d2e0-52ef-41a7-a0be-3c97df2f4fd8-kube-api-access-pbh29\") pod \"nmstate-operator-646758c888-4zz5r\" (UID: \"7545d2e0-52ef-41a7-a0be-3c97df2f4fd8\") " pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.512636 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbh29\" (UniqueName: \"kubernetes.io/projected/7545d2e0-52ef-41a7-a0be-3c97df2f4fd8-kube-api-access-pbh29\") pod \"nmstate-operator-646758c888-4zz5r\" (UID: \"7545d2e0-52ef-41a7-a0be-3c97df2f4fd8\") " pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.544778 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.940315 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-4zz5r"] Jan 31 16:40:26 crc kubenswrapper[4730]: W0131 16:40:26.947971 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7545d2e0_52ef_41a7_a0be_3c97df2f4fd8.slice/crio-08030792e3f518f0d3833c09912a04f8bc02a8c4f4fdb0a3b37df8c41c6f028f WatchSource:0}: Error finding container 08030792e3f518f0d3833c09912a04f8bc02a8c4f4fdb0a3b37df8c41c6f028f: Status 404 returned error can't find the container with id 08030792e3f518f0d3833c09912a04f8bc02a8c4f4fdb0a3b37df8c41c6f028f Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.975474 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:40:26 crc kubenswrapper[4730]: I0131 16:40:26.975530 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:40:27 crc kubenswrapper[4730]: I0131 16:40:27.039445 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" event={"ID":"7545d2e0-52ef-41a7-a0be-3c97df2f4fd8","Type":"ContainerStarted","Data":"08030792e3f518f0d3833c09912a04f8bc02a8c4f4fdb0a3b37df8c41c6f028f"} Jan 31 16:40:30 crc kubenswrapper[4730]: I0131 16:40:30.073943 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" event={"ID":"7545d2e0-52ef-41a7-a0be-3c97df2f4fd8","Type":"ContainerStarted","Data":"3bfa02d6ba0fed62574a0795de88f169d67296d465a4de0f6d004854e72771ef"} Jan 31 16:40:30 crc kubenswrapper[4730]: I0131 16:40:30.109419 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-4zz5r" podStartSLOduration=2.018549561 podStartE2EDuration="4.109390738s" podCreationTimestamp="2026-01-31 16:40:26 +0000 UTC" firstStartedPulling="2026-01-31 16:40:26.950146076 +0000 UTC m=+613.756203002" lastFinishedPulling="2026-01-31 16:40:29.040987263 +0000 UTC m=+615.847044179" observedRunningTime="2026-01-31 16:40:30.098103641 +0000 UTC m=+616.904160567" watchObservedRunningTime="2026-01-31 16:40:30.109390738 +0000 UTC m=+616.915447694" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.095825 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-sk79b"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.096843 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.100969 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-b5xp5" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.109485 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.110135 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.111769 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.124755 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-sk79b"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.139997 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.150521 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fjff6"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.151324 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.158352 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6hj\" (UniqueName: \"kubernetes.io/projected/487cafab-d04e-41a9-8f02-fde62acc89d9-kube-api-access-kp6hj\") pod \"nmstate-webhook-8474b5b9d8-lh2fv\" (UID: \"487cafab-d04e-41a9-8f02-fde62acc89d9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.158447 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcdz9\" (UniqueName: \"kubernetes.io/projected/1e9f7b4c-83b7-465f-b684-8131c5e63277-kube-api-access-bcdz9\") pod \"nmstate-metrics-54757c584b-sk79b\" (UID: \"1e9f7b4c-83b7-465f-b684-8131c5e63277\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.158479 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-ovs-socket\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.158510 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-nmstate-lock\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.158530 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/487cafab-d04e-41a9-8f02-fde62acc89d9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lh2fv\" (UID: \"487cafab-d04e-41a9-8f02-fde62acc89d9\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.158552 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-dbus-socket\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.158569 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpk2c\" (UniqueName: \"kubernetes.io/projected/2126b9cb-bf66-467f-8f34-400ea7d780ee-kube-api-access-lpk2c\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.254378 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.254985 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.257403 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.257557 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.257759 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4rl8g" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259237 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-dbus-socket\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259267 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpk2c\" (UniqueName: \"kubernetes.io/projected/2126b9cb-bf66-467f-8f34-400ea7d780ee-kube-api-access-lpk2c\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259291 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjx6k\" (UniqueName: \"kubernetes.io/projected/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-kube-api-access-qjx6k\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259312 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6hj\" (UniqueName: \"kubernetes.io/projected/487cafab-d04e-41a9-8f02-fde62acc89d9-kube-api-access-kp6hj\") pod \"nmstate-webhook-8474b5b9d8-lh2fv\" (UID: \"487cafab-d04e-41a9-8f02-fde62acc89d9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259327 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259441 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcdz9\" (UniqueName: \"kubernetes.io/projected/1e9f7b4c-83b7-465f-b684-8131c5e63277-kube-api-access-bcdz9\") pod \"nmstate-metrics-54757c584b-sk79b\" (UID: \"1e9f7b4c-83b7-465f-b684-8131c5e63277\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259505 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-ovs-socket\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259545 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259563 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-nmstate-lock\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259581 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/487cafab-d04e-41a9-8f02-fde62acc89d9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lh2fv\" (UID: \"487cafab-d04e-41a9-8f02-fde62acc89d9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: E0131 16:40:31.259668 4730 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 31 16:40:31 crc kubenswrapper[4730]: E0131 16:40:31.259705 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/487cafab-d04e-41a9-8f02-fde62acc89d9-tls-key-pair podName:487cafab-d04e-41a9-8f02-fde62acc89d9 nodeName:}" failed. No retries permitted until 2026-01-31 16:40:31.759691417 +0000 UTC m=+618.565748333 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/487cafab-d04e-41a9-8f02-fde62acc89d9-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-lh2fv" (UID: "487cafab-d04e-41a9-8f02-fde62acc89d9") : secret "openshift-nmstate-webhook" not found Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259820 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-dbus-socket\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259914 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-ovs-socket\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.259949 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2126b9cb-bf66-467f-8f34-400ea7d780ee-nmstate-lock\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.272095 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.281464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpk2c\" (UniqueName: \"kubernetes.io/projected/2126b9cb-bf66-467f-8f34-400ea7d780ee-kube-api-access-lpk2c\") pod \"nmstate-handler-fjff6\" (UID: \"2126b9cb-bf66-467f-8f34-400ea7d780ee\") " pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.283965 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6hj\" (UniqueName: \"kubernetes.io/projected/487cafab-d04e-41a9-8f02-fde62acc89d9-kube-api-access-kp6hj\") pod \"nmstate-webhook-8474b5b9d8-lh2fv\" (UID: \"487cafab-d04e-41a9-8f02-fde62acc89d9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.285517 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcdz9\" (UniqueName: \"kubernetes.io/projected/1e9f7b4c-83b7-465f-b684-8131c5e63277-kube-api-access-bcdz9\") pod \"nmstate-metrics-54757c584b-sk79b\" (UID: \"1e9f7b4c-83b7-465f-b684-8131c5e63277\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.360004 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjx6k\" (UniqueName: \"kubernetes.io/projected/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-kube-api-access-qjx6k\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.360046 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: 
\"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.360105 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: E0131 16:40:31.360256 4730 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 31 16:40:31 crc kubenswrapper[4730]: E0131 16:40:31.360342 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-plugin-serving-cert podName:485682da-cdf9-4bb1-ad07-06ed4ac7ff92 nodeName:}" failed. No retries permitted until 2026-01-31 16:40:31.860319604 +0000 UTC m=+618.666376520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-p6fl8" (UID: "485682da-cdf9-4bb1-ad07-06ed4ac7ff92") : secret "plugin-serving-cert" not found Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.361058 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.385348 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjx6k\" (UniqueName: \"kubernetes.io/projected/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-kube-api-access-qjx6k\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.435950 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.467929 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.520466 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-ff94ddbd5-9wlth"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.521150 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.558697 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ff94ddbd5-9wlth"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.670028 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-config\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.670068 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv7h9\" (UniqueName: \"kubernetes.io/projected/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-kube-api-access-dv7h9\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.670094 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-trusted-ca-bundle\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.670375 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-serving-cert\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.670493 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-oauth-config\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.670606 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-oauth-serving-cert\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.670678 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-service-ca\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: W0131 16:40:31.696278 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9f7b4c_83b7_465f_b684_8131c5e63277.slice/crio-892123fac0a7879afc5a4d0be51a1752263a38ec968f73176dc49139d58b7968 WatchSource:0}: Error finding container 892123fac0a7879afc5a4d0be51a1752263a38ec968f73176dc49139d58b7968: Status 404 returned 
error can't find the container with id 892123fac0a7879afc5a4d0be51a1752263a38ec968f73176dc49139d58b7968 Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.699313 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-sk79b"] Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771333 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-serving-cert\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771436 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/487cafab-d04e-41a9-8f02-fde62acc89d9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lh2fv\" (UID: \"487cafab-d04e-41a9-8f02-fde62acc89d9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771465 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-oauth-config\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771524 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-oauth-serving-cert\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771565 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-service-ca\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771607 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-config\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771646 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv7h9\" (UniqueName: \"kubernetes.io/projected/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-kube-api-access-dv7h9\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.771687 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-trusted-ca-bundle\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.772842 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-oauth-serving-cert\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.775287 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-config\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.775513 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-service-ca\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.776118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-trusted-ca-bundle\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.777659 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-oauth-config\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.779016 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/487cafab-d04e-41a9-8f02-fde62acc89d9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lh2fv\" (UID: \"487cafab-d04e-41a9-8f02-fde62acc89d9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.780834 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-console-serving-cert\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.790315 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv7h9\" (UniqueName: \"kubernetes.io/projected/c1bf64c4-66e8-44d9-8e52-26fd0fadcc93-kube-api-access-dv7h9\") pod \"console-ff94ddbd5-9wlth\" (UID: \"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93\") " pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.870663 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.872855 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:31 crc kubenswrapper[4730]: I0131 16:40:31.878351 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/485682da-cdf9-4bb1-ad07-06ed4ac7ff92-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-p6fl8\" (UID: \"485682da-cdf9-4bb1-ad07-06ed4ac7ff92\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:32 crc kubenswrapper[4730]: I0131 16:40:32.040179 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:32 crc kubenswrapper[4730]: I0131 16:40:32.088404 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ff94ddbd5-9wlth"] Jan 31 16:40:32 crc kubenswrapper[4730]: W0131 16:40:32.096077 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1bf64c4_66e8_44d9_8e52_26fd0fadcc93.slice/crio-9ecdf33685a16037e09c81fbffcf889e82beb3398700b6eea39eeaec38489737 WatchSource:0}: Error finding container 9ecdf33685a16037e09c81fbffcf889e82beb3398700b6eea39eeaec38489737: Status 404 returned error can't find the container with id 9ecdf33685a16037e09c81fbffcf889e82beb3398700b6eea39eeaec38489737 Jan 31 16:40:32 crc kubenswrapper[4730]: I0131 16:40:32.096897 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fjff6" event={"ID":"2126b9cb-bf66-467f-8f34-400ea7d780ee","Type":"ContainerStarted","Data":"c1815dd3e5ac86585bbeba87e162f325336183cdee83284d03a8665b0347e3e2"} Jan 31 16:40:32 crc kubenswrapper[4730]: I0131 16:40:32.098363 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" event={"ID":"1e9f7b4c-83b7-465f-b684-8131c5e63277","Type":"ContainerStarted","Data":"892123fac0a7879afc5a4d0be51a1752263a38ec968f73176dc49139d58b7968"} Jan 31 16:40:32 crc kubenswrapper[4730]: I0131 16:40:32.174843 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" Jan 31 16:40:32 crc kubenswrapper[4730]: I0131 16:40:32.246305 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv"] Jan 31 16:40:32 crc kubenswrapper[4730]: W0131 16:40:32.253997 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487cafab_d04e_41a9_8f02_fde62acc89d9.slice/crio-0577273aea34653729472e5d0336f2609820f7514eda80d7e62ba293ca6e74d6 WatchSource:0}: Error finding container 0577273aea34653729472e5d0336f2609820f7514eda80d7e62ba293ca6e74d6: Status 404 returned error can't find the container with id 0577273aea34653729472e5d0336f2609820f7514eda80d7e62ba293ca6e74d6 Jan 31 16:40:32 crc kubenswrapper[4730]: I0131 16:40:32.388222 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8"] Jan 31 16:40:33 crc kubenswrapper[4730]: I0131 16:40:33.105726 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ff94ddbd5-9wlth" event={"ID":"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93","Type":"ContainerStarted","Data":"e4a3fe628a82469dd29a47df707a94af028f86ca1e703f4b095c079f0024aa67"} Jan 31 16:40:33 crc kubenswrapper[4730]: I0131 16:40:33.106151 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ff94ddbd5-9wlth" event={"ID":"c1bf64c4-66e8-44d9-8e52-26fd0fadcc93","Type":"ContainerStarted","Data":"9ecdf33685a16037e09c81fbffcf889e82beb3398700b6eea39eeaec38489737"} Jan 31 16:40:33 crc kubenswrapper[4730]: I0131 16:40:33.107911 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" event={"ID":"485682da-cdf9-4bb1-ad07-06ed4ac7ff92","Type":"ContainerStarted","Data":"063864e1c5cde3c9c26d31fab39dbd38763356ec76cd4923cf5a95b6181795af"} Jan 31 16:40:33 crc kubenswrapper[4730]: I0131 16:40:33.109764 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" event={"ID":"487cafab-d04e-41a9-8f02-fde62acc89d9","Type":"ContainerStarted","Data":"0577273aea34653729472e5d0336f2609820f7514eda80d7e62ba293ca6e74d6"} Jan 31 16:40:33 crc kubenswrapper[4730]: I0131 16:40:33.126138 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-ff94ddbd5-9wlth" podStartSLOduration=2.126124379 podStartE2EDuration="2.126124379s" podCreationTimestamp="2026-01-31 16:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:40:33.122450982 +0000 UTC m=+619.928507898" watchObservedRunningTime="2026-01-31 16:40:33.126124379 +0000 UTC m=+619.932181295" Jan 31 16:40:35 crc kubenswrapper[4730]: I0131 16:40:35.123992 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" event={"ID":"1e9f7b4c-83b7-465f-b684-8131c5e63277","Type":"ContainerStarted","Data":"06666a799a59ef5f44e41262ddea1e4548627e1d4ebe93e804543bca229191c5"} Jan 31 16:40:35 crc kubenswrapper[4730]: I0131 16:40:35.125743 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" event={"ID":"487cafab-d04e-41a9-8f02-fde62acc89d9","Type":"ContainerStarted","Data":"fc770fe442ddc68b24c593de8851723374af0f441b6684a6de525a527d9618a3"} Jan 31 16:40:35 crc kubenswrapper[4730]: 
I0131 16:40:35.126007 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:35 crc kubenswrapper[4730]: I0131 16:40:35.128507 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fjff6" event={"ID":"2126b9cb-bf66-467f-8f34-400ea7d780ee","Type":"ContainerStarted","Data":"25029c6c80b397552977ee37dcdd488ab4a9280789384398fd831316bb37474f"} Jan 31 16:40:35 crc kubenswrapper[4730]: I0131 16:40:35.128723 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:35 crc kubenswrapper[4730]: I0131 16:40:35.144047 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" podStartSLOduration=2.240205995 podStartE2EDuration="4.144024231s" podCreationTimestamp="2026-01-31 16:40:31 +0000 UTC" firstStartedPulling="2026-01-31 16:40:32.257594678 +0000 UTC m=+619.063651594" lastFinishedPulling="2026-01-31 16:40:34.161412904 +0000 UTC m=+620.967469830" observedRunningTime="2026-01-31 16:40:35.142110775 +0000 UTC m=+621.948167691" watchObservedRunningTime="2026-01-31 16:40:35.144024231 +0000 UTC m=+621.950081147" Jan 31 16:40:35 crc kubenswrapper[4730]: I0131 16:40:35.198233 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fjff6" podStartSLOduration=1.5910073630000001 podStartE2EDuration="4.198213462s" podCreationTimestamp="2026-01-31 16:40:31 +0000 UTC" firstStartedPulling="2026-01-31 16:40:31.535194814 +0000 UTC m=+618.341251730" lastFinishedPulling="2026-01-31 16:40:34.142400913 +0000 UTC m=+620.948457829" observedRunningTime="2026-01-31 16:40:35.197770429 +0000 UTC m=+622.003827345" watchObservedRunningTime="2026-01-31 16:40:35.198213462 +0000 UTC m=+622.004270378" Jan 31 16:40:36 crc kubenswrapper[4730]: I0131 16:40:36.135371 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" event={"ID":"485682da-cdf9-4bb1-ad07-06ed4ac7ff92","Type":"ContainerStarted","Data":"c3bf2f0cd2d09629533a102e8bd2f72c7ab2a2c433049d6a41ac09efa2b1b1e7"} Jan 31 16:40:36 crc kubenswrapper[4730]: I0131 16:40:36.152526 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-p6fl8" podStartSLOduration=2.201148833 podStartE2EDuration="5.152511519s" podCreationTimestamp="2026-01-31 16:40:31 +0000 UTC" firstStartedPulling="2026-01-31 16:40:32.408161623 +0000 UTC m=+619.214218539" lastFinishedPulling="2026-01-31 16:40:35.359524299 +0000 UTC m=+622.165581225" observedRunningTime="2026-01-31 16:40:36.150734138 +0000 UTC m=+622.956791054" watchObservedRunningTime="2026-01-31 16:40:36.152511519 +0000 UTC m=+622.958568435" Jan 31 16:40:37 crc kubenswrapper[4730]: I0131 16:40:37.146481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" event={"ID":"1e9f7b4c-83b7-465f-b684-8131c5e63277","Type":"ContainerStarted","Data":"200931a938c848a8f196c5d21ab9bcc3b0302520ead6ac0548489e059c71f727"} Jan 31 16:40:37 crc kubenswrapper[4730]: I0131 16:40:37.184661 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-sk79b" podStartSLOduration=1.438280746 podStartE2EDuration="6.184634282s" podCreationTimestamp="2026-01-31 16:40:31 +0000 UTC" firstStartedPulling="2026-01-31 
16:40:31.700194348 +0000 UTC m=+618.506251254" lastFinishedPulling="2026-01-31 16:40:36.446547884 +0000 UTC m=+623.252604790" observedRunningTime="2026-01-31 16:40:37.17867408 +0000 UTC m=+623.984731036" watchObservedRunningTime="2026-01-31 16:40:37.184634282 +0000 UTC m=+623.990691228" Jan 31 16:40:41 crc kubenswrapper[4730]: I0131 16:40:41.505072 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fjff6" Jan 31 16:40:41 crc kubenswrapper[4730]: I0131 16:40:41.871543 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:41 crc kubenswrapper[4730]: I0131 16:40:41.871630 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:41 crc kubenswrapper[4730]: I0131 16:40:41.881143 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:42 crc kubenswrapper[4730]: I0131 16:40:42.199089 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-ff94ddbd5-9wlth" Jan 31 16:40:42 crc kubenswrapper[4730]: I0131 16:40:42.284747 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6v2xk"] Jan 31 16:40:52 crc kubenswrapper[4730]: I0131 16:40:52.050482 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lh2fv" Jan 31 16:40:56 crc kubenswrapper[4730]: I0131 16:40:56.975297 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:40:56 crc kubenswrapper[4730]: I0131 16:40:56.975728 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:40:56 crc kubenswrapper[4730]: I0131 16:40:56.975798 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:40:56 crc kubenswrapper[4730]: I0131 16:40:56.977311 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81c316c56ff641f78d1454bdb69055b2cc577488dee85bfffb222944d2c0456f"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:40:56 crc kubenswrapper[4730]: I0131 16:40:56.977380 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://81c316c56ff641f78d1454bdb69055b2cc577488dee85bfffb222944d2c0456f" gracePeriod=600 Jan 31 16:40:57 crc kubenswrapper[4730]: I0131 16:40:57.289549 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" 
containerID="81c316c56ff641f78d1454bdb69055b2cc577488dee85bfffb222944d2c0456f" exitCode=0 Jan 31 16:40:57 crc kubenswrapper[4730]: I0131 16:40:57.289636 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"81c316c56ff641f78d1454bdb69055b2cc577488dee85bfffb222944d2c0456f"} Jan 31 16:40:57 crc kubenswrapper[4730]: I0131 16:40:57.289959 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"d31bd001ee74e3469a2749b923f42adb83a31cb422ef5d9b45febe42584ea0e1"} Jan 31 16:40:57 crc kubenswrapper[4730]: I0131 16:40:57.289985 4730 scope.go:117] "RemoveContainer" containerID="9b11c9a3a6b003984d5cc7b0769b316d6026aca4dc2bc56230ee6ace4c824f75" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.167549 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h"] Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.169291 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.171592 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.177650 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h"] Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.247366 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.247428 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dfpc\" (UniqueName: \"kubernetes.io/projected/702064e1-dbb1-4b48-a075-2dc133933618-kube-api-access-2dfpc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.247450 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.348654 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: 
\"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.348692 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dfpc\" (UniqueName: \"kubernetes.io/projected/702064e1-dbb1-4b48-a075-2dc133933618-kube-api-access-2dfpc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.348713 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.349184 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.349187 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.373244 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dfpc\" (UniqueName: \"kubernetes.io/projected/702064e1-dbb1-4b48-a075-2dc133933618-kube-api-access-2dfpc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.483408 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:05 crc kubenswrapper[4730]: I0131 16:41:05.657627 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h"] Jan 31 16:41:06 crc kubenswrapper[4730]: I0131 16:41:06.349937 4730 generic.go:334] "Generic (PLEG): container finished" podID="702064e1-dbb1-4b48-a075-2dc133933618" containerID="4b9da417c545e04ffc12192518f18a8ddab879ac3a47030910871e64e87ec290" exitCode=0 Jan 31 16:41:06 crc kubenswrapper[4730]: I0131 16:41:06.350770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" event={"ID":"702064e1-dbb1-4b48-a075-2dc133933618","Type":"ContainerDied","Data":"4b9da417c545e04ffc12192518f18a8ddab879ac3a47030910871e64e87ec290"} Jan 31 16:41:06 crc kubenswrapper[4730]: I0131 16:41:06.350892 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" event={"ID":"702064e1-dbb1-4b48-a075-2dc133933618","Type":"ContainerStarted","Data":"7ced3121178db246d593e472ac4304dea8b58dc1aa74d3cd869920c3667e65df"} Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.332134 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6v2xk" podUID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" containerName="console" containerID="cri-o://73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416" gracePeriod=15 Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.805193 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6v2xk_8100d0f3-9c7f-4835-b98a-c79cc76c29ef/console/0.log" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.805456 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.914687 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-oauth-config\") pod \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.914787 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-config\") pod \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.914896 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlw4r\" (UniqueName: \"kubernetes.io/projected/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-kube-api-access-zlw4r\") pod \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.914948 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-serving-cert\") pod \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.914979 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-service-ca\") pod \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.915030 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-oauth-serving-cert\") pod \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.915116 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-trusted-ca-bundle\") pod \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\" (UID: \"8100d0f3-9c7f-4835-b98a-c79cc76c29ef\") " Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.915513 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-config" (OuterVolumeSpecName: "console-config") pod "8100d0f3-9c7f-4835-b98a-c79cc76c29ef" (UID: "8100d0f3-9c7f-4835-b98a-c79cc76c29ef"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.915900 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-service-ca" (OuterVolumeSpecName: "service-ca") pod "8100d0f3-9c7f-4835-b98a-c79cc76c29ef" (UID: "8100d0f3-9c7f-4835-b98a-c79cc76c29ef"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.916191 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8100d0f3-9c7f-4835-b98a-c79cc76c29ef" (UID: "8100d0f3-9c7f-4835-b98a-c79cc76c29ef"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.916184 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8100d0f3-9c7f-4835-b98a-c79cc76c29ef" (UID: "8100d0f3-9c7f-4835-b98a-c79cc76c29ef"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.920857 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-kube-api-access-zlw4r" (OuterVolumeSpecName: "kube-api-access-zlw4r") pod "8100d0f3-9c7f-4835-b98a-c79cc76c29ef" (UID: "8100d0f3-9c7f-4835-b98a-c79cc76c29ef"). InnerVolumeSpecName "kube-api-access-zlw4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.925055 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8100d0f3-9c7f-4835-b98a-c79cc76c29ef" (UID: "8100d0f3-9c7f-4835-b98a-c79cc76c29ef"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:41:07 crc kubenswrapper[4730]: I0131 16:41:07.927256 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8100d0f3-9c7f-4835-b98a-c79cc76c29ef" (UID: "8100d0f3-9c7f-4835-b98a-c79cc76c29ef"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.016462 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.016519 4730 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.016539 4730 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.016560 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlw4r\" (UniqueName: \"kubernetes.io/projected/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-kube-api-access-zlw4r\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.016582 4730 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.016602 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.016619 4730 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8100d0f3-9c7f-4835-b98a-c79cc76c29ef-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.366035 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6v2xk_8100d0f3-9c7f-4835-b98a-c79cc76c29ef/console/0.log" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.366087 4730 generic.go:334] "Generic (PLEG): container finished" podID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" containerID="73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416" exitCode=2 Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.366154 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6v2xk" event={"ID":"8100d0f3-9c7f-4835-b98a-c79cc76c29ef","Type":"ContainerDied","Data":"73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416"} Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.366182 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6v2xk" event={"ID":"8100d0f3-9c7f-4835-b98a-c79cc76c29ef","Type":"ContainerDied","Data":"4f31d040924df93618ff60ca51aea1dccb98144352f6fe4a04eeb38de3651fc6"} Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.366187 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6v2xk" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.366197 4730 scope.go:117] "RemoveContainer" containerID="73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.368968 4730 generic.go:334] "Generic (PLEG): container finished" podID="702064e1-dbb1-4b48-a075-2dc133933618" containerID="338e201d6ca3b278c05e6dc4c3d77a3e6b2b3ba05b5caed6c034ae96e3e8fd3c" exitCode=0 Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.369019 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" event={"ID":"702064e1-dbb1-4b48-a075-2dc133933618","Type":"ContainerDied","Data":"338e201d6ca3b278c05e6dc4c3d77a3e6b2b3ba05b5caed6c034ae96e3e8fd3c"} Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.390930 4730 scope.go:117] "RemoveContainer" containerID="73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416" Jan 31 16:41:08 crc kubenswrapper[4730]: E0131 16:41:08.395957 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416\": container with ID starting with 73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416 not found: ID does not exist" containerID="73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.396022 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416"} err="failed to get container status \"73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416\": rpc error: code = NotFound desc = could not find container \"73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416\": container with ID starting with 73e489a502d4014d96663b5efda6f88d633a4f2b9540446a0af47a25f929d416 not found: ID does not exist" Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.426582 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6v2xk"] Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.429602 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6v2xk"] Jan 31 16:41:08 crc kubenswrapper[4730]: I0131 16:41:08.473956 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" path="/var/lib/kubelet/pods/8100d0f3-9c7f-4835-b98a-c79cc76c29ef/volumes" Jan 31 16:41:09 crc kubenswrapper[4730]: I0131 16:41:09.382597 4730 generic.go:334] "Generic (PLEG): container finished" podID="702064e1-dbb1-4b48-a075-2dc133933618" containerID="06c1e3db2b51d686e91fc49f7ab5dc7e66246156d8311d042ca2e30e2a0dce88" exitCode=0 Jan 31 16:41:09 crc kubenswrapper[4730]: I0131 16:41:09.382939 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" event={"ID":"702064e1-dbb1-4b48-a075-2dc133933618","Type":"ContainerDied","Data":"06c1e3db2b51d686e91fc49f7ab5dc7e66246156d8311d042ca2e30e2a0dce88"} Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.674136 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.855456 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-bundle\") pod \"702064e1-dbb1-4b48-a075-2dc133933618\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.855580 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-util\") pod \"702064e1-dbb1-4b48-a075-2dc133933618\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.855641 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dfpc\" (UniqueName: \"kubernetes.io/projected/702064e1-dbb1-4b48-a075-2dc133933618-kube-api-access-2dfpc\") pod \"702064e1-dbb1-4b48-a075-2dc133933618\" (UID: \"702064e1-dbb1-4b48-a075-2dc133933618\") " Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.856907 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-bundle" (OuterVolumeSpecName: "bundle") pod "702064e1-dbb1-4b48-a075-2dc133933618" (UID: "702064e1-dbb1-4b48-a075-2dc133933618"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.863300 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702064e1-dbb1-4b48-a075-2dc133933618-kube-api-access-2dfpc" (OuterVolumeSpecName: "kube-api-access-2dfpc") pod "702064e1-dbb1-4b48-a075-2dc133933618" (UID: "702064e1-dbb1-4b48-a075-2dc133933618"). InnerVolumeSpecName "kube-api-access-2dfpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.876757 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-util" (OuterVolumeSpecName: "util") pod "702064e1-dbb1-4b48-a075-2dc133933618" (UID: "702064e1-dbb1-4b48-a075-2dc133933618"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.957177 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.957222 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702064e1-dbb1-4b48-a075-2dc133933618-util\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:10 crc kubenswrapper[4730]: I0131 16:41:10.957241 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dfpc\" (UniqueName: \"kubernetes.io/projected/702064e1-dbb1-4b48-a075-2dc133933618-kube-api-access-2dfpc\") on node \"crc\" DevicePath \"\"" Jan 31 16:41:11 crc kubenswrapper[4730]: I0131 16:41:11.397073 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" event={"ID":"702064e1-dbb1-4b48-a075-2dc133933618","Type":"ContainerDied","Data":"7ced3121178db246d593e472ac4304dea8b58dc1aa74d3cd869920c3667e65df"} Jan 31 16:41:11 crc kubenswrapper[4730]: I0131 16:41:11.397414 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ced3121178db246d593e472ac4304dea8b58dc1aa74d3cd869920c3667e65df" Jan 31 16:41:11 crc kubenswrapper[4730]: I0131 16:41:11.397151 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.680962 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh"] Jan 31 16:41:19 crc kubenswrapper[4730]: E0131 16:41:19.681682 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702064e1-dbb1-4b48-a075-2dc133933618" containerName="pull" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.681695 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="702064e1-dbb1-4b48-a075-2dc133933618" containerName="pull" Jan 31 16:41:19 crc kubenswrapper[4730]: E0131 16:41:19.681704 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702064e1-dbb1-4b48-a075-2dc133933618" containerName="extract" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.681711 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="702064e1-dbb1-4b48-a075-2dc133933618" containerName="extract" Jan 31 16:41:19 crc kubenswrapper[4730]: E0131 16:41:19.681720 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702064e1-dbb1-4b48-a075-2dc133933618" containerName="util" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.681727 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="702064e1-dbb1-4b48-a075-2dc133933618" containerName="util" Jan 31 16:41:19 crc kubenswrapper[4730]: E0131 16:41:19.681736 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" containerName="console" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.681743 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" containerName="console" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.681964 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8100d0f3-9c7f-4835-b98a-c79cc76c29ef" containerName="console" Jan 
31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.681975 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="702064e1-dbb1-4b48-a075-2dc133933618" containerName="extract" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.682348 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.684767 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.684878 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.684934 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cgtrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.685092 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.685204 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.760374 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kbz\" (UniqueName: \"kubernetes.io/projected/59226704-24cc-4677-bb59-408503c70795-kube-api-access-z9kbz\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.760436 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59226704-24cc-4677-bb59-408503c70795-webhook-cert\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.760635 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59226704-24cc-4677-bb59-408503c70795-apiservice-cert\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.774924 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh"] Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.866238 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59226704-24cc-4677-bb59-408503c70795-apiservice-cert\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.866373 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9kbz\" (UniqueName: 
\"kubernetes.io/projected/59226704-24cc-4677-bb59-408503c70795-kube-api-access-z9kbz\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.866428 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59226704-24cc-4677-bb59-408503c70795-webhook-cert\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.884439 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59226704-24cc-4677-bb59-408503c70795-webhook-cert\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.899522 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59226704-24cc-4677-bb59-408503c70795-apiservice-cert\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.907031 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9kbz\" (UniqueName: \"kubernetes.io/projected/59226704-24cc-4677-bb59-408503c70795-kube-api-access-z9kbz\") pod \"metallb-operator-controller-manager-56c885bfd6-vqnrh\" (UID: \"59226704-24cc-4677-bb59-408503c70795\") " pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.991607 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt"] Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.992238 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.995464 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-stwtg" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.995552 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.995581 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 16:41:19 crc kubenswrapper[4730]: I0131 16:41:19.998143 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.009495 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt"] Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.069293 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/430bc339-5bd3-4873-94e9-229d6861a1ba-webhook-cert\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.069345 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8dwg\" (UniqueName: \"kubernetes.io/projected/430bc339-5bd3-4873-94e9-229d6861a1ba-kube-api-access-v8dwg\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.069388 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/430bc339-5bd3-4873-94e9-229d6861a1ba-apiservice-cert\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.172189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/430bc339-5bd3-4873-94e9-229d6861a1ba-apiservice-cert\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.172513 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/430bc339-5bd3-4873-94e9-229d6861a1ba-webhook-cert\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.172561 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8dwg\" (UniqueName: \"kubernetes.io/projected/430bc339-5bd3-4873-94e9-229d6861a1ba-kube-api-access-v8dwg\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.177218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/430bc339-5bd3-4873-94e9-229d6861a1ba-apiservice-cert\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.177841 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/430bc339-5bd3-4873-94e9-229d6861a1ba-webhook-cert\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.194497 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8dwg\" (UniqueName: \"kubernetes.io/projected/430bc339-5bd3-4873-94e9-229d6861a1ba-kube-api-access-v8dwg\") pod \"metallb-operator-webhook-server-545856c6bc-fnppt\" (UID: \"430bc339-5bd3-4873-94e9-229d6861a1ba\") " pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.290089 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh"] Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.363646 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.452569 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" event={"ID":"59226704-24cc-4677-bb59-408503c70795","Type":"ContainerStarted","Data":"74333325ed1417f73690890d0e363787a6963b1034527d3627ce84bea11c4a6a"} Jan 31 16:41:20 crc kubenswrapper[4730]: I0131 16:41:20.801760 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt"] Jan 31 16:41:20 crc kubenswrapper[4730]: W0131 16:41:20.808201 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod430bc339_5bd3_4873_94e9_229d6861a1ba.slice/crio-6382843820c8843dd32d3de69d7c161f66fbfa1b0469ac7618bf5c7f6d5e9c55 WatchSource:0}: Error finding container 6382843820c8843dd32d3de69d7c161f66fbfa1b0469ac7618bf5c7f6d5e9c55: Status 404 returned error can't find the container with id 6382843820c8843dd32d3de69d7c161f66fbfa1b0469ac7618bf5c7f6d5e9c55 Jan 31 16:41:21 crc kubenswrapper[4730]: I0131 16:41:21.459472 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" event={"ID":"430bc339-5bd3-4873-94e9-229d6861a1ba","Type":"ContainerStarted","Data":"6382843820c8843dd32d3de69d7c161f66fbfa1b0469ac7618bf5c7f6d5e9c55"} Jan 31 16:41:24 crc kubenswrapper[4730]: I0131 16:41:24.478207 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" event={"ID":"59226704-24cc-4677-bb59-408503c70795","Type":"ContainerStarted","Data":"f2b677217f970eca47d95a83618b3e7c9d34bb8c27b9d80de79a96d76c73a85f"} Jan 31 16:41:24 crc kubenswrapper[4730]: I0131 16:41:24.478982 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:41:24 crc kubenswrapper[4730]: I0131 16:41:24.537991 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" podStartSLOduration=2.284592343 podStartE2EDuration="5.537969115s" podCreationTimestamp="2026-01-31 16:41:19 +0000 UTC" firstStartedPulling="2026-01-31 16:41:20.302306835 +0000 UTC m=+667.108363741" lastFinishedPulling="2026-01-31 16:41:23.555683597 +0000 UTC m=+670.361740513" observedRunningTime="2026-01-31 16:41:24.529955643 +0000 UTC m=+671.336012569" watchObservedRunningTime="2026-01-31 16:41:24.537969115 +0000 UTC m=+671.344026021" Jan 31 16:41:26 crc kubenswrapper[4730]: I0131 16:41:26.492884 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" event={"ID":"430bc339-5bd3-4873-94e9-229d6861a1ba","Type":"ContainerStarted","Data":"f0fdda206fc08590c46e9bd45ab8745f3ba65039f1f3508fc26d04ca7508ae0d"} Jan 31 16:41:26 crc kubenswrapper[4730]: I0131 16:41:26.493192 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:26 crc kubenswrapper[4730]: I0131 16:41:26.515821 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" podStartSLOduration=2.81424863 podStartE2EDuration="7.515772485s" podCreationTimestamp="2026-01-31 16:41:19 +0000 UTC" firstStartedPulling="2026-01-31 16:41:20.811457367 +0000 UTC m=+667.617514283" lastFinishedPulling="2026-01-31 16:41:25.512981222 +0000 UTC m=+672.319038138" observedRunningTime="2026-01-31 16:41:26.510495282 +0000 UTC m=+673.316552198" watchObservedRunningTime="2026-01-31 16:41:26.515772485 +0000 UTC m=+673.321829431" Jan 31 16:41:40 crc kubenswrapper[4730]: I0131 16:41:40.370251 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-545856c6bc-fnppt" Jan 31 16:41:59 crc kubenswrapper[4730]: I0131 16:41:59.998449 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-56c885bfd6-vqnrh" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.738528 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-b2bpp"] Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.740673 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.742517 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.742925 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.744567 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7fjkl" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.763690 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph"] Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.764597 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.766427 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.776559 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph"] Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.847699 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-sockets\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.847787 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trp8q\" (UniqueName: \"kubernetes.io/projected/6b47a859-3bb1-4179-9cc2-8274173a22d4-kube-api-access-trp8q\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.847830 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.847845 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics-certs\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.848122 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-startup\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.848146 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-conf\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.848161 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-reloader\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.877824 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xxzrr"] Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.878931 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-xxzrr" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.886854 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.888640 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.890268 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-j87zt" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.900408 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.902025 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-zv6nq"] Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.902822 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.903738 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.916214 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zv6nq"] Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949136 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949181 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-sockets\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949205 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-metrics-certs\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949228 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-metrics-certs\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949250 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129f61a1-e50c-4f81-a931-d9924c771c4f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8lbph\" (UID: \"129f61a1-e50c-4f81-a931-d9924c771c4f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949268 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-cert\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949298 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trp8q\" (UniqueName: \"kubernetes.io/projected/6b47a859-3bb1-4179-9cc2-8274173a22d4-kube-api-access-trp8q\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949320 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics-certs\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949335 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949356 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q492z\" (UniqueName: \"kubernetes.io/projected/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-kube-api-access-q492z\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-startup\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949388 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-metallb-excludel2\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949404 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-conf\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949421 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-reloader\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949442 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhmk\" (UniqueName: \"kubernetes.io/projected/129f61a1-e50c-4f81-a931-d9924c771c4f-kube-api-access-qfhmk\") pod \"frr-k8s-webhook-server-7df86c4f6c-8lbph\" (UID: \"129f61a1-e50c-4f81-a931-d9924c771c4f\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:00 crc kubenswrapper[4730]: E0131 16:42:00.949935 4730 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 31 16:42:00 crc kubenswrapper[4730]: E0131 16:42:00.950083 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics-certs podName:6b47a859-3bb1-4179-9cc2-8274173a22d4 nodeName:}" failed. No retries permitted until 2026-01-31 16:42:01.450061731 +0000 UTC m=+708.256118647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics-certs") pod "frr-k8s-b2bpp" (UID: "6b47a859-3bb1-4179-9cc2-8274173a22d4") : secret "frr-k8s-certs-secret" not found Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.950154 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.949993 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-sockets\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.950629 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-conf\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.950836 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6b47a859-3bb1-4179-9cc2-8274173a22d4-reloader\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.951038 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6b47a859-3bb1-4179-9cc2-8274173a22d4-frr-startup\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:00 crc kubenswrapper[4730]: I0131 16:42:00.980025 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trp8q\" (UniqueName: \"kubernetes.io/projected/6b47a859-3bb1-4179-9cc2-8274173a22d4-kube-api-access-trp8q\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051116 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051168 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-metrics-certs\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051199 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-metrics-certs\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129f61a1-e50c-4f81-a931-d9924c771c4f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8lbph\" (UID: \"129f61a1-e50c-4f81-a931-d9924c771c4f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051244 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-cert\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051299 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q492z\" (UniqueName: \"kubernetes.io/projected/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-kube-api-access-q492z\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: E0131 16:42:01.051300 4730 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051317 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-metallb-excludel2\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051342 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqnw\" (UniqueName: \"kubernetes.io/projected/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-kube-api-access-hfqnw\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: E0131 16:42:01.051369 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist podName:da3276e9-6b00-45e8-8db5-6bfc6f7f276f nodeName:}" failed. No retries permitted until 2026-01-31 16:42:01.551351198 +0000 UTC m=+708.357408114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist") pod "speaker-xxzrr" (UID: "da3276e9-6b00-45e8-8db5-6bfc6f7f276f") : secret "metallb-memberlist" not found Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.051394 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhmk\" (UniqueName: \"kubernetes.io/projected/129f61a1-e50c-4f81-a931-d9924c771c4f-kube-api-access-qfhmk\") pod \"frr-k8s-webhook-server-7df86c4f6c-8lbph\" (UID: \"129f61a1-e50c-4f81-a931-d9924c771c4f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.052694 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-metallb-excludel2\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.054156 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-metrics-certs\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.054666 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129f61a1-e50c-4f81-a931-d9924c771c4f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8lbph\" (UID: \"129f61a1-e50c-4f81-a931-d9924c771c4f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.055730 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-metrics-certs\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.056295 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-cert\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.070576 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhmk\" (UniqueName: \"kubernetes.io/projected/129f61a1-e50c-4f81-a931-d9924c771c4f-kube-api-access-qfhmk\") pod \"frr-k8s-webhook-server-7df86c4f6c-8lbph\" (UID: \"129f61a1-e50c-4f81-a931-d9924c771c4f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.076907 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q492z\" (UniqueName: \"kubernetes.io/projected/08181da5-8c97-4a4a-bfaf-f0f300cacf5b-kube-api-access-q492z\") pod \"controller-6968d8fdc4-zv6nq\" (UID: \"08181da5-8c97-4a4a-bfaf-f0f300cacf5b\") " pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.078667 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.152091 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfqnw\" (UniqueName: \"kubernetes.io/projected/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-kube-api-access-hfqnw\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.169426 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfqnw\" (UniqueName: \"kubernetes.io/projected/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-kube-api-access-hfqnw\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.215325 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.388844 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-zv6nq"] Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.457224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics-certs\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.464063 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b47a859-3bb1-4179-9cc2-8274173a22d4-metrics-certs\") pod \"frr-k8s-b2bpp\" (UID: \"6b47a859-3bb1-4179-9cc2-8274173a22d4\") " pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.487306 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph"] Jan 31 16:42:01 crc kubenswrapper[4730]: W0131 16:42:01.492003 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod129f61a1_e50c_4f81_a931_d9924c771c4f.slice/crio-e2bb59bce950be79e0fb6729cad2e433d315c5923d3dd7958b349f3fdeef07e0 WatchSource:0}: Error finding container e2bb59bce950be79e0fb6729cad2e433d315c5923d3dd7958b349f3fdeef07e0: Status 404 returned error can't find the container with id e2bb59bce950be79e0fb6729cad2e433d315c5923d3dd7958b349f3fdeef07e0 Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.558234 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:01 crc kubenswrapper[4730]: E0131 16:42:01.558774 4730 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 16:42:01 crc kubenswrapper[4730]: E0131 16:42:01.558911 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist podName:da3276e9-6b00-45e8-8db5-6bfc6f7f276f nodeName:}" failed. No retries permitted until 2026-01-31 16:42:02.558897143 +0000 UTC m=+709.364954059 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist") pod "speaker-xxzrr" (UID: "da3276e9-6b00-45e8-8db5-6bfc6f7f276f") : secret "metallb-memberlist" not found Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.659263 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.711084 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" event={"ID":"129f61a1-e50c-4f81-a931-d9924c771c4f","Type":"ContainerStarted","Data":"e2bb59bce950be79e0fb6729cad2e433d315c5923d3dd7958b349f3fdeef07e0"} Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.713502 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zv6nq" event={"ID":"08181da5-8c97-4a4a-bfaf-f0f300cacf5b","Type":"ContainerStarted","Data":"a1eaa81baf64c2be91f6491eb010e7e716b2f8288575bd37755fec49c13fe7be"} Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.713534 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zv6nq" event={"ID":"08181da5-8c97-4a4a-bfaf-f0f300cacf5b","Type":"ContainerStarted","Data":"6a2889621537886f51d0812f05b2adb76eade6c198a7afbd2468699953e1d5a3"} Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.713552 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-zv6nq" event={"ID":"08181da5-8c97-4a4a-bfaf-f0f300cacf5b","Type":"ContainerStarted","Data":"dd7f01a30dadf540d4c5009230bc58dcf6984ef8f29addb2359b1f5422637c35"} Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.713685 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:01 crc kubenswrapper[4730]: I0131 16:42:01.731933 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-zv6nq" podStartSLOduration=1.731898908 podStartE2EDuration="1.731898908s" podCreationTimestamp="2026-01-31 16:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:42:01.727556192 +0000 UTC m=+708.533613108" watchObservedRunningTime="2026-01-31 16:42:01.731898908 +0000 UTC m=+708.537955864" Jan 31 16:42:02 crc kubenswrapper[4730]: I0131 16:42:02.571950 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:02 crc kubenswrapper[4730]: I0131 16:42:02.587515 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/da3276e9-6b00-45e8-8db5-6bfc6f7f276f-memberlist\") pod \"speaker-xxzrr\" (UID: \"da3276e9-6b00-45e8-8db5-6bfc6f7f276f\") " pod="metallb-system/speaker-xxzrr" Jan 31 16:42:02 crc kubenswrapper[4730]: I0131 16:42:02.693793 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-xxzrr" Jan 31 16:42:02 crc kubenswrapper[4730]: W0131 16:42:02.713014 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda3276e9_6b00_45e8_8db5_6bfc6f7f276f.slice/crio-2c96f8a395523323948e6bf8af077de10ce4477a3cfe3b5d8a1b9d260f1b7efb WatchSource:0}: Error finding container 2c96f8a395523323948e6bf8af077de10ce4477a3cfe3b5d8a1b9d260f1b7efb: Status 404 returned error can't find the container with id 2c96f8a395523323948e6bf8af077de10ce4477a3cfe3b5d8a1b9d260f1b7efb Jan 31 16:42:02 crc kubenswrapper[4730]: I0131 16:42:02.721397 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xxzrr" event={"ID":"da3276e9-6b00-45e8-8db5-6bfc6f7f276f","Type":"ContainerStarted","Data":"2c96f8a395523323948e6bf8af077de10ce4477a3cfe3b5d8a1b9d260f1b7efb"} Jan 31 16:42:02 crc kubenswrapper[4730]: I0131 16:42:02.723688 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerStarted","Data":"fa55ab8ffe44eb68fcec4fc963b248cd65412c4ed0ae4b1fc231f1dea2aa223b"} Jan 31 16:42:03 crc kubenswrapper[4730]: I0131 16:42:03.735242 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xxzrr" event={"ID":"da3276e9-6b00-45e8-8db5-6bfc6f7f276f","Type":"ContainerStarted","Data":"e9cee87e355a909fca2fd6889053c9e12a3d9d4afde49121a3c8a6fd267413b8"} Jan 31 16:42:03 crc kubenswrapper[4730]: I0131 16:42:03.735506 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xxzrr" event={"ID":"da3276e9-6b00-45e8-8db5-6bfc6f7f276f","Type":"ContainerStarted","Data":"ee27be2231a2a4d4275e2fe6a52035335f63ad5de3b5f877ad3abbacd0c25c85"} Jan 31 16:42:03 crc kubenswrapper[4730]: I0131 16:42:03.736531 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xxzrr" Jan 31 16:42:03 crc kubenswrapper[4730]: I0131 16:42:03.757047 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xxzrr" podStartSLOduration=3.757033251 podStartE2EDuration="3.757033251s" podCreationTimestamp="2026-01-31 16:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:42:03.75216836 +0000 UTC m=+710.558225276" watchObservedRunningTime="2026-01-31 16:42:03.757033251 +0000 UTC m=+710.563090167" Jan 31 16:42:10 crc kubenswrapper[4730]: I0131 16:42:10.781686 4730 generic.go:334] "Generic (PLEG): container finished" podID="6b47a859-3bb1-4179-9cc2-8274173a22d4" containerID="5be3c84972fd325b04076fc79079171b5c753446fd06e6e1cd1409b20c7a5df9" exitCode=0 Jan 31 16:42:10 crc kubenswrapper[4730]: I0131 16:42:10.782406 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerDied","Data":"5be3c84972fd325b04076fc79079171b5c753446fd06e6e1cd1409b20c7a5df9"} Jan 31 16:42:10 crc kubenswrapper[4730]: I0131 16:42:10.787094 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" event={"ID":"129f61a1-e50c-4f81-a931-d9924c771c4f","Type":"ContainerStarted","Data":"06f4d55265019f9abe1a02461caa9581c5f31f87941a0487f348dd64ff2736a6"} Jan 31 16:42:10 crc kubenswrapper[4730]: I0131 16:42:10.787499 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:10 crc kubenswrapper[4730]: I0131 16:42:10.846137 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" podStartSLOduration=2.707261494 podStartE2EDuration="10.846114245s" podCreationTimestamp="2026-01-31 16:42:00 +0000 UTC" firstStartedPulling="2026-01-31 16:42:01.493588609 +0000 UTC m=+708.299645525" lastFinishedPulling="2026-01-31 16:42:09.63244136 +0000 UTC m=+716.438498276" observedRunningTime="2026-01-31 16:42:10.844435358 +0000 UTC m=+717.650492324" watchObservedRunningTime="2026-01-31 16:42:10.846114245 +0000 UTC m=+717.652171201" Jan 31 16:42:11 crc kubenswrapper[4730]: I0131 16:42:11.220946 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-zv6nq" Jan 31 16:42:11 crc kubenswrapper[4730]: I0131 16:42:11.795682 4730 generic.go:334] "Generic (PLEG): container finished" podID="6b47a859-3bb1-4179-9cc2-8274173a22d4" containerID="8e6260e2bcca13ebccc91dd17b85879ac41fff4ab6715ad2c0b1232935d75ded" exitCode=0 Jan 31 16:42:11 crc kubenswrapper[4730]: I0131 16:42:11.795878 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerDied","Data":"8e6260e2bcca13ebccc91dd17b85879ac41fff4ab6715ad2c0b1232935d75ded"} Jan 31 16:42:12 crc kubenswrapper[4730]: I0131 16:42:12.697057 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xxzrr" Jan 31 16:42:12 crc kubenswrapper[4730]: I0131 16:42:12.807167 4730 generic.go:334] "Generic (PLEG): container finished" podID="6b47a859-3bb1-4179-9cc2-8274173a22d4" containerID="a20b763c9f0e6cb1c1a0c71c7012733a0f843f0cf5e186fc238535dc0847bb74" exitCode=0 Jan 31 16:42:12 crc kubenswrapper[4730]: I0131 16:42:12.807299 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerDied","Data":"a20b763c9f0e6cb1c1a0c71c7012733a0f843f0cf5e186fc238535dc0847bb74"} Jan 31 16:42:13 crc kubenswrapper[4730]: I0131 16:42:13.817422 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerStarted","Data":"621b958708e18e5e50b270588c7079d56c55ecd5805a63218b0ef03c7c8f217c"} Jan 31 16:42:13 crc kubenswrapper[4730]: I0131 16:42:13.818826 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerStarted","Data":"e6e65d456bf1ff19f8abff3dfc6ccd77ca8fcfc3f49b1ad9ceea01d44e884375"} Jan 31 16:42:13 crc kubenswrapper[4730]: I0131 16:42:13.818918 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerStarted","Data":"92925ce0999d4b3485db3c7ed7c559ba37ea77b69d50399ba334a1edfc7c260f"} Jan 31 16:42:13 crc kubenswrapper[4730]: I0131 16:42:13.818989 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerStarted","Data":"2df243ba23dcc1d10a291b9bd61b51731bf5f0c6959f0fe1f8a38ffb7201e061"} Jan 31 16:42:13 crc kubenswrapper[4730]: I0131 16:42:13.819069 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerStarted","Data":"5dd9e02f37b2f5049c27ed3cdb064807d88f48c5f9c054dd2b0ca495a426a6e7"} Jan 31 16:42:14 crc kubenswrapper[4730]: I0131 16:42:14.875823 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b2bpp" event={"ID":"6b47a859-3bb1-4179-9cc2-8274173a22d4","Type":"ContainerStarted","Data":"4265e9a5b61806e757c63ed26427adfa0e0c4a2ad39d955235331c156a105d68"} Jan 31 16:42:14 crc kubenswrapper[4730]: I0131 16:42:14.876300 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:14 crc kubenswrapper[4730]: I0131 16:42:14.905064 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-b2bpp" podStartSLOduration=6.990863926 podStartE2EDuration="14.905047062s" podCreationTimestamp="2026-01-31 16:42:00 +0000 UTC" firstStartedPulling="2026-01-31 16:42:01.757261384 +0000 UTC m=+708.563318300" lastFinishedPulling="2026-01-31 16:42:09.67144451 +0000 UTC m=+716.477501436" observedRunningTime="2026-01-31 16:42:14.900117603 +0000 UTC m=+721.706174519" watchObservedRunningTime="2026-01-31 16:42:14.905047062 +0000 UTC m=+721.711103978" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.632233 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d9n57"] Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.633129 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d9n57" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.638233 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.639111 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-m265h" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.639305 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.655338 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd9d2\" (UniqueName: \"kubernetes.io/projected/8387bcf1-fa45-45b2-bb6d-b0da4820cdd7-kube-api-access-rd9d2\") pod \"openstack-operator-index-d9n57\" (UID: \"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7\") " pod="openstack-operators/openstack-operator-index-d9n57" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.664928 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d9n57"] Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.756445 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9d2\" (UniqueName: \"kubernetes.io/projected/8387bcf1-fa45-45b2-bb6d-b0da4820cdd7-kube-api-access-rd9d2\") pod \"openstack-operator-index-d9n57\" (UID: \"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7\") " pod="openstack-operators/openstack-operator-index-d9n57" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.775506 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd9d2\" (UniqueName: \"kubernetes.io/projected/8387bcf1-fa45-45b2-bb6d-b0da4820cdd7-kube-api-access-rd9d2\") pod \"openstack-operator-index-d9n57\" (UID: 
\"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7\") " pod="openstack-operators/openstack-operator-index-d9n57" Jan 31 16:42:15 crc kubenswrapper[4730]: I0131 16:42:15.961347 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d9n57" Jan 31 16:42:16 crc kubenswrapper[4730]: I0131 16:42:16.427704 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d9n57"] Jan 31 16:42:16 crc kubenswrapper[4730]: I0131 16:42:16.659655 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:16 crc kubenswrapper[4730]: I0131 16:42:16.707669 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:16 crc kubenswrapper[4730]: I0131 16:42:16.891394 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d9n57" event={"ID":"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7","Type":"ContainerStarted","Data":"3ce3a032ae1ff8ffa1dc0485f0210670462cec7da882ff6afdc7ffb835ab8d4b"} Jan 31 16:42:17 crc kubenswrapper[4730]: I0131 16:42:17.986774 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d9n57"] Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.396619 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-62wxs"] Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.398205 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.429999 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-62wxs"] Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.592697 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krm9b\" (UniqueName: \"kubernetes.io/projected/74db53b1-8fee-4566-8280-8d5e4358ee93-kube-api-access-krm9b\") pod \"openstack-operator-index-62wxs\" (UID: \"74db53b1-8fee-4566-8280-8d5e4358ee93\") " pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.694321 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krm9b\" (UniqueName: \"kubernetes.io/projected/74db53b1-8fee-4566-8280-8d5e4358ee93-kube-api-access-krm9b\") pod \"openstack-operator-index-62wxs\" (UID: \"74db53b1-8fee-4566-8280-8d5e4358ee93\") " pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.715529 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krm9b\" (UniqueName: \"kubernetes.io/projected/74db53b1-8fee-4566-8280-8d5e4358ee93-kube-api-access-krm9b\") pod \"openstack-operator-index-62wxs\" (UID: \"74db53b1-8fee-4566-8280-8d5e4358ee93\") " pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.782847 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.914952 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d9n57" event={"ID":"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7","Type":"ContainerStarted","Data":"5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9"} Jan 31 16:42:18 crc kubenswrapper[4730]: I0131 16:42:18.915070 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-d9n57" podUID="8387bcf1-fa45-45b2-bb6d-b0da4820cdd7" containerName="registry-server" containerID="cri-o://5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9" gracePeriod=2 Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.026947 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d9n57" podStartSLOduration=2.091517092 podStartE2EDuration="4.026920104s" podCreationTimestamp="2026-01-31 16:42:15 +0000 UTC" firstStartedPulling="2026-01-31 16:42:16.434577682 +0000 UTC m=+723.240634608" lastFinishedPulling="2026-01-31 16:42:18.369980704 +0000 UTC m=+725.176037620" observedRunningTime="2026-01-31 16:42:18.933784128 +0000 UTC m=+725.739841044" watchObservedRunningTime="2026-01-31 16:42:19.026920104 +0000 UTC m=+725.832977040" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.027704 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-62wxs"] Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.247761 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d9n57" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.402788 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd9d2\" (UniqueName: \"kubernetes.io/projected/8387bcf1-fa45-45b2-bb6d-b0da4820cdd7-kube-api-access-rd9d2\") pod \"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7\" (UID: \"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7\") " Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.407321 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8387bcf1-fa45-45b2-bb6d-b0da4820cdd7-kube-api-access-rd9d2" (OuterVolumeSpecName: "kube-api-access-rd9d2") pod "8387bcf1-fa45-45b2-bb6d-b0da4820cdd7" (UID: "8387bcf1-fa45-45b2-bb6d-b0da4820cdd7"). InnerVolumeSpecName "kube-api-access-rd9d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.504614 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd9d2\" (UniqueName: \"kubernetes.io/projected/8387bcf1-fa45-45b2-bb6d-b0da4820cdd7-kube-api-access-rd9d2\") on node \"crc\" DevicePath \"\"" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.925455 4730 generic.go:334] "Generic (PLEG): container finished" podID="8387bcf1-fa45-45b2-bb6d-b0da4820cdd7" containerID="5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9" exitCode=0 Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.925518 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d9n57" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.925543 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d9n57" event={"ID":"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7","Type":"ContainerDied","Data":"5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9"} Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.926598 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d9n57" event={"ID":"8387bcf1-fa45-45b2-bb6d-b0da4820cdd7","Type":"ContainerDied","Data":"3ce3a032ae1ff8ffa1dc0485f0210670462cec7da882ff6afdc7ffb835ab8d4b"} Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.926627 4730 scope.go:117] "RemoveContainer" containerID="5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.928704 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-62wxs" event={"ID":"74db53b1-8fee-4566-8280-8d5e4358ee93","Type":"ContainerStarted","Data":"4ab24367f612f59b05cd5f239a4bdc8921f1a40dd6ba75528ac2992f2c6c8f5f"} Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.928726 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-62wxs" event={"ID":"74db53b1-8fee-4566-8280-8d5e4358ee93","Type":"ContainerStarted","Data":"be85ede67b33a5ac835c8860144731740f735f69919bf0bd86118d891fad3156"} Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.962657 4730 scope.go:117] "RemoveContainer" containerID="5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9" Jan 31 16:42:19 crc kubenswrapper[4730]: E0131 16:42:19.963912 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9\": container with ID starting with 5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9 not found: ID does not exist" containerID="5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.963965 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9"} err="failed to get container status \"5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9\": rpc error: code = NotFound desc = could not find container \"5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9\": container with ID starting with 5d348d3a1e2d43844244af0793188176a4839ea7fb28e5539f0267151c9894b9 not found: ID does not exist" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.977344 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-62wxs" podStartSLOduration=1.9231609600000001 podStartE2EDuration="1.977317277s" podCreationTimestamp="2026-01-31 16:42:18 +0000 UTC" firstStartedPulling="2026-01-31 16:42:19.04063024 +0000 UTC m=+725.846687166" lastFinishedPulling="2026-01-31 16:42:19.094786547 +0000 UTC m=+725.900843483" observedRunningTime="2026-01-31 16:42:19.949787401 +0000 UTC m=+726.755844347" watchObservedRunningTime="2026-01-31 16:42:19.977317277 +0000 UTC m=+726.783374223" Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.985081 4730 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack-operators/openstack-operator-index-d9n57"] Jan 31 16:42:19 crc kubenswrapper[4730]: I0131 16:42:19.992794 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-d9n57"] Jan 31 16:42:20 crc kubenswrapper[4730]: I0131 16:42:20.480796 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8387bcf1-fa45-45b2-bb6d-b0da4820cdd7" path="/var/lib/kubelet/pods/8387bcf1-fa45-45b2-bb6d-b0da4820cdd7/volumes" Jan 31 16:42:21 crc kubenswrapper[4730]: I0131 16:42:21.084771 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8lbph" Jan 31 16:42:28 crc kubenswrapper[4730]: I0131 16:42:28.783984 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:28 crc kubenswrapper[4730]: I0131 16:42:28.784472 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:28 crc kubenswrapper[4730]: I0131 16:42:28.816293 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:29 crc kubenswrapper[4730]: I0131 16:42:29.031345 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-62wxs" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.049516 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc"] Jan 31 16:42:30 crc kubenswrapper[4730]: E0131 16:42:30.050238 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8387bcf1-fa45-45b2-bb6d-b0da4820cdd7" containerName="registry-server" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.050260 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8387bcf1-fa45-45b2-bb6d-b0da4820cdd7" containerName="registry-server" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.050473 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8387bcf1-fa45-45b2-bb6d-b0da4820cdd7" containerName="registry-server" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.051978 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.054292 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xnv7w" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.064991 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc"] Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.171500 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-util\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.171737 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc8r8\" (UniqueName: \"kubernetes.io/projected/9996fc15-d71e-46b8-8ad0-bebb587efa83-kube-api-access-nc8r8\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.171861 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-bundle\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.273048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-util\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.273197 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc8r8\" (UniqueName: \"kubernetes.io/projected/9996fc15-d71e-46b8-8ad0-bebb587efa83-kube-api-access-nc8r8\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.273240 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-bundle\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.273641 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-util\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.273917 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-bundle\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.298860 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc8r8\" (UniqueName: \"kubernetes.io/projected/9996fc15-d71e-46b8-8ad0-bebb587efa83-kube-api-access-nc8r8\") pod \"be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.396464 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:30 crc kubenswrapper[4730]: I0131 16:42:30.643987 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc"] Jan 31 16:42:30 crc kubenswrapper[4730]: W0131 16:42:30.647434 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9996fc15_d71e_46b8_8ad0_bebb587efa83.slice/crio-3067c01af0db47e3ce1a6dbadfab965a4a13b3866456fec0908e3162d11d740f WatchSource:0}: Error finding container 3067c01af0db47e3ce1a6dbadfab965a4a13b3866456fec0908e3162d11d740f: Status 404 returned error can't find the container with id 3067c01af0db47e3ce1a6dbadfab965a4a13b3866456fec0908e3162d11d740f Jan 31 16:42:31 crc kubenswrapper[4730]: I0131 16:42:31.014678 4730 generic.go:334] "Generic (PLEG): container finished" podID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerID="a60ec813cf14076215852767be4f084a17020f83651afe340ad81e95c1395637" exitCode=0 Jan 31 16:42:31 crc kubenswrapper[4730]: I0131 16:42:31.015011 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" event={"ID":"9996fc15-d71e-46b8-8ad0-bebb587efa83","Type":"ContainerDied","Data":"a60ec813cf14076215852767be4f084a17020f83651afe340ad81e95c1395637"} Jan 31 16:42:31 crc kubenswrapper[4730]: I0131 16:42:31.015039 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" event={"ID":"9996fc15-d71e-46b8-8ad0-bebb587efa83","Type":"ContainerStarted","Data":"3067c01af0db47e3ce1a6dbadfab965a4a13b3866456fec0908e3162d11d740f"} Jan 31 16:42:31 crc kubenswrapper[4730]: I0131 16:42:31.663354 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-b2bpp" Jan 31 16:42:32 crc kubenswrapper[4730]: I0131 16:42:32.028299 4730 generic.go:334] "Generic (PLEG): container finished" podID="9996fc15-d71e-46b8-8ad0-bebb587efa83" 
containerID="57fe5e00cbdf891bbcebae4d94d978aa98f9f199da827defcaa9f60b66166bfd" exitCode=0 Jan 31 16:42:32 crc kubenswrapper[4730]: I0131 16:42:32.028343 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" event={"ID":"9996fc15-d71e-46b8-8ad0-bebb587efa83","Type":"ContainerDied","Data":"57fe5e00cbdf891bbcebae4d94d978aa98f9f199da827defcaa9f60b66166bfd"} Jan 31 16:42:33 crc kubenswrapper[4730]: I0131 16:42:33.043060 4730 generic.go:334] "Generic (PLEG): container finished" podID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerID="d3f6ed7dc1d7755827e25e250acba9646501ad486ab7f8fcb3374e84f2608fc8" exitCode=0 Jan 31 16:42:33 crc kubenswrapper[4730]: I0131 16:42:33.043164 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" event={"ID":"9996fc15-d71e-46b8-8ad0-bebb587efa83","Type":"ContainerDied","Data":"d3f6ed7dc1d7755827e25e250acba9646501ad486ab7f8fcb3374e84f2608fc8"} Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.371982 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.557939 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-bundle\") pod \"9996fc15-d71e-46b8-8ad0-bebb587efa83\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.558113 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-util\") pod \"9996fc15-d71e-46b8-8ad0-bebb587efa83\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.558182 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc8r8\" (UniqueName: \"kubernetes.io/projected/9996fc15-d71e-46b8-8ad0-bebb587efa83-kube-api-access-nc8r8\") pod \"9996fc15-d71e-46b8-8ad0-bebb587efa83\" (UID: \"9996fc15-d71e-46b8-8ad0-bebb587efa83\") " Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.559023 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-bundle" (OuterVolumeSpecName: "bundle") pod "9996fc15-d71e-46b8-8ad0-bebb587efa83" (UID: "9996fc15-d71e-46b8-8ad0-bebb587efa83"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.569187 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9996fc15-d71e-46b8-8ad0-bebb587efa83-kube-api-access-nc8r8" (OuterVolumeSpecName: "kube-api-access-nc8r8") pod "9996fc15-d71e-46b8-8ad0-bebb587efa83" (UID: "9996fc15-d71e-46b8-8ad0-bebb587efa83"). InnerVolumeSpecName "kube-api-access-nc8r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.593003 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-util" (OuterVolumeSpecName: "util") pod "9996fc15-d71e-46b8-8ad0-bebb587efa83" (UID: "9996fc15-d71e-46b8-8ad0-bebb587efa83"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.661587 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc8r8\" (UniqueName: \"kubernetes.io/projected/9996fc15-d71e-46b8-8ad0-bebb587efa83-kube-api-access-nc8r8\") on node \"crc\" DevicePath \"\"" Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.661635 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:42:34 crc kubenswrapper[4730]: I0131 16:42:34.661652 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9996fc15-d71e-46b8-8ad0-bebb587efa83-util\") on node \"crc\" DevicePath \"\"" Jan 31 16:42:35 crc kubenswrapper[4730]: I0131 16:42:35.063514 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" event={"ID":"9996fc15-d71e-46b8-8ad0-bebb587efa83","Type":"ContainerDied","Data":"3067c01af0db47e3ce1a6dbadfab965a4a13b3866456fec0908e3162d11d740f"} Jan 31 16:42:35 crc kubenswrapper[4730]: I0131 16:42:35.063555 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3067c01af0db47e3ce1a6dbadfab965a4a13b3866456fec0908e3162d11d740f" Jan 31 16:42:35 crc kubenswrapper[4730]: I0131 16:42:35.063573 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.229939 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg"] Jan 31 16:42:42 crc kubenswrapper[4730]: E0131 16:42:42.230611 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerName="pull" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.230625 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerName="pull" Jan 31 16:42:42 crc kubenswrapper[4730]: E0131 16:42:42.230652 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerName="util" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.230659 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerName="util" Jan 31 16:42:42 crc kubenswrapper[4730]: E0131 16:42:42.230670 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerName="extract" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.230679 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerName="extract" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.230839 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9996fc15-d71e-46b8-8ad0-bebb587efa83" containerName="extract" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.231249 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.235569 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-d7gmv" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.257090 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg"] Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.371494 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg78z\" (UniqueName: \"kubernetes.io/projected/11612ac7-b5f1-4c2f-ab71-2f7a455beedf-kube-api-access-jg78z\") pod \"openstack-operator-controller-init-567cf89b5c-4tqlg\" (UID: \"11612ac7-b5f1-4c2f-ab71-2f7a455beedf\") " pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.473171 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg78z\" (UniqueName: \"kubernetes.io/projected/11612ac7-b5f1-4c2f-ab71-2f7a455beedf-kube-api-access-jg78z\") pod \"openstack-operator-controller-init-567cf89b5c-4tqlg\" (UID: \"11612ac7-b5f1-4c2f-ab71-2f7a455beedf\") " pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.490671 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg78z\" (UniqueName: \"kubernetes.io/projected/11612ac7-b5f1-4c2f-ab71-2f7a455beedf-kube-api-access-jg78z\") pod \"openstack-operator-controller-init-567cf89b5c-4tqlg\" (UID: \"11612ac7-b5f1-4c2f-ab71-2f7a455beedf\") " pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" Jan 31 16:42:42 crc kubenswrapper[4730]: I0131 16:42:42.550961 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" Jan 31 16:42:43 crc kubenswrapper[4730]: I0131 16:42:43.006060 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg"] Jan 31 16:42:43 crc kubenswrapper[4730]: I0131 16:42:43.125615 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" event={"ID":"11612ac7-b5f1-4c2f-ab71-2f7a455beedf","Type":"ContainerStarted","Data":"ffd17739512e8c82ae8136ce2f682bab59effa54b1eeb55d1d57e82d80bb4005"} Jan 31 16:42:48 crc kubenswrapper[4730]: I0131 16:42:48.165420 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" event={"ID":"11612ac7-b5f1-4c2f-ab71-2f7a455beedf","Type":"ContainerStarted","Data":"ae4af3d869c9de2125d0278bdedf506d331fbe2318d0e25bca8c9796872fe0dd"} Jan 31 16:42:48 crc kubenswrapper[4730]: I0131 16:42:48.166014 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" Jan 31 16:42:48 crc kubenswrapper[4730]: I0131 16:42:48.207233 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" podStartSLOduration=1.89803878 podStartE2EDuration="6.207214071s" podCreationTimestamp="2026-01-31 16:42:42 +0000 UTC" firstStartedPulling="2026-01-31 16:42:43.028215278 +0000 UTC m=+749.834272184" lastFinishedPulling="2026-01-31 16:42:47.337390559 +0000 UTC m=+754.143447475" observedRunningTime="2026-01-31 16:42:48.20221248 +0000 UTC m=+755.008269416" watchObservedRunningTime="2026-01-31 16:42:48.207214071 +0000 UTC m=+755.013270997" Jan 31 16:42:52 crc kubenswrapper[4730]: I0131 16:42:52.553861 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-567cf89b5c-4tqlg" Jan 31 16:42:54 crc kubenswrapper[4730]: I0131 16:42:54.152639 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.537544 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.538608 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.549526 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-js4wt" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.553939 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.560777 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.561488 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.566259 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pzf4l" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.587791 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.604189 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.604935 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.608003 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-m7wv7" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.624137 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.659151 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.659867 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.660769 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s7nj\" (UniqueName: \"kubernetes.io/projected/13990a08-64f5-47af-a6fb-59b6b547fe7f-kube-api-access-4s7nj\") pod \"cinder-operator-controller-manager-8d874c8fc-bzkp6\" (UID: \"13990a08-64f5-47af-a6fb-59b6b547fe7f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.660888 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnsg\" (UniqueName: \"kubernetes.io/projected/1ecbf8bc-da38-4cc2-8d7e-eef855555957-kube-api-access-mjnsg\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-ktcvd\" (UID: \"1ecbf8bc-da38-4cc2-8d7e-eef855555957\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.667204 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6z2bb" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.668278 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.691260 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.691987 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.694345 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5dlhw" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.702032 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.702736 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.705624 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9nx58" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.734967 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.744758 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.758450 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-89f56"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.759253 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.761702 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjnsg\" (UniqueName: \"kubernetes.io/projected/1ecbf8bc-da38-4cc2-8d7e-eef855555957-kube-api-access-mjnsg\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-ktcvd\" (UID: \"1ecbf8bc-da38-4cc2-8d7e-eef855555957\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.761771 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc54j\" (UniqueName: \"kubernetes.io/projected/5d112f3e-564e-4003-90fe-6472c5643d40-kube-api-access-hc54j\") pod \"designate-operator-controller-manager-6d9697b7f4-v5rrb\" (UID: \"5d112f3e-564e-4003-90fe-6472c5643d40\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.761791 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncl62\" (UniqueName: \"kubernetes.io/projected/d13ce75a-a1e5-4a49-a46a-514b904c460a-kube-api-access-ncl62\") pod \"glance-operator-controller-manager-8886f4c47-hmbg9\" (UID: \"d13ce75a-a1e5-4a49-a46a-514b904c460a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.761848 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s7nj\" (UniqueName: \"kubernetes.io/projected/13990a08-64f5-47af-a6fb-59b6b547fe7f-kube-api-access-4s7nj\") pod \"cinder-operator-controller-manager-8d874c8fc-bzkp6\" (UID: \"13990a08-64f5-47af-a6fb-59b6b547fe7f\") " 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.762481 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.762616 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wbvcr" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.767516 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.768350 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.770698 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-n7j6f" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.782668 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-89f56"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.800365 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.814334 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s7nj\" (UniqueName: \"kubernetes.io/projected/13990a08-64f5-47af-a6fb-59b6b547fe7f-kube-api-access-4s7nj\") pod \"cinder-operator-controller-manager-8d874c8fc-bzkp6\" (UID: \"13990a08-64f5-47af-a6fb-59b6b547fe7f\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.816465 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjnsg\" (UniqueName: \"kubernetes.io/projected/1ecbf8bc-da38-4cc2-8d7e-eef855555957-kube-api-access-mjnsg\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-ktcvd\" (UID: \"1ecbf8bc-da38-4cc2-8d7e-eef855555957\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.824610 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.825426 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.837441 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-w2rqz" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.851873 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.852697 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.853966 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2wdpd" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.862630 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd9hb\" (UniqueName: \"kubernetes.io/projected/db806e61-96eb-4f21-9521-85c8cca3dbb6-kube-api-access-dd9hb\") pod \"heat-operator-controller-manager-69d6db494d-pcvgw\" (UID: \"db806e61-96eb-4f21-9521-85c8cca3dbb6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.862680 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.862725 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc54j\" (UniqueName: \"kubernetes.io/projected/5d112f3e-564e-4003-90fe-6472c5643d40-kube-api-access-hc54j\") pod \"designate-operator-controller-manager-6d9697b7f4-v5rrb\" (UID: \"5d112f3e-564e-4003-90fe-6472c5643d40\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.862744 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncl62\" (UniqueName: \"kubernetes.io/projected/d13ce75a-a1e5-4a49-a46a-514b904c460a-kube-api-access-ncl62\") pod \"glance-operator-controller-manager-8886f4c47-hmbg9\" (UID: \"d13ce75a-a1e5-4a49-a46a-514b904c460a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.862765 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq44q\" (UniqueName: \"kubernetes.io/projected/b542fd94-b4bf-44af-8276-7d2e686f5bb4-kube-api-access-fq44q\") pod \"ironic-operator-controller-manager-5f4b8bd54d-vcgsr\" (UID: \"b542fd94-b4bf-44af-8276-7d2e686f5bb4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.862828 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncg8p\" (UniqueName: \"kubernetes.io/projected/58a9ca1b-4bc7-4912-ae16-3210ecea5790-kube-api-access-ncg8p\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.862846 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g928s\" (UniqueName: \"kubernetes.io/projected/58bb04d3-9031-43d5-b96f-0874d7ad4f79-kube-api-access-g928s\") pod \"horizon-operator-controller-manager-5fb775575f-w9r8d\" (UID: \"58bb04d3-9031-43d5-b96f-0874d7ad4f79\") " 
pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.865853 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.871478 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.899026 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.899365 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.930871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncl62\" (UniqueName: \"kubernetes.io/projected/d13ce75a-a1e5-4a49-a46a-514b904c460a-kube-api-access-ncl62\") pod \"glance-operator-controller-manager-8886f4c47-hmbg9\" (UID: \"d13ce75a-a1e5-4a49-a46a-514b904c460a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.932381 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc54j\" (UniqueName: \"kubernetes.io/projected/5d112f3e-564e-4003-90fe-6472c5643d40-kube-api-access-hc54j\") pod \"designate-operator-controller-manager-6d9697b7f4-v5rrb\" (UID: \"5d112f3e-564e-4003-90fe-6472c5643d40\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.964151 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncg8p\" (UniqueName: \"kubernetes.io/projected/58a9ca1b-4bc7-4912-ae16-3210ecea5790-kube-api-access-ncg8p\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.964186 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g928s\" (UniqueName: \"kubernetes.io/projected/58bb04d3-9031-43d5-b96f-0874d7ad4f79-kube-api-access-g928s\") pod \"horizon-operator-controller-manager-5fb775575f-w9r8d\" (UID: \"58bb04d3-9031-43d5-b96f-0874d7ad4f79\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.964214 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdt25\" (UniqueName: \"kubernetes.io/projected/4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4-kube-api-access-jdt25\") pod \"keystone-operator-controller-manager-84f48565d4-dl95k\" (UID: \"4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.964243 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd9hb\" (UniqueName: \"kubernetes.io/projected/db806e61-96eb-4f21-9521-85c8cca3dbb6-kube-api-access-dd9hb\") pod \"heat-operator-controller-manager-69d6db494d-pcvgw\" (UID: 
\"db806e61-96eb-4f21-9521-85c8cca3dbb6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.964271 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.964297 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfkc\" (UniqueName: \"kubernetes.io/projected/3cd35794-6a52-452b-9e7b-d1bb4f828dc1-kube-api-access-nkfkc\") pod \"manila-operator-controller-manager-7dd968899f-4nshr\" (UID: \"3cd35794-6a52-452b-9e7b-d1bb4f828dc1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.964325 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq44q\" (UniqueName: \"kubernetes.io/projected/b542fd94-b4bf-44af-8276-7d2e686f5bb4-kube-api-access-fq44q\") pod \"ironic-operator-controller-manager-5f4b8bd54d-vcgsr\" (UID: \"b542fd94-b4bf-44af-8276-7d2e686f5bb4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" Jan 31 16:43:11 crc kubenswrapper[4730]: E0131 16:43:11.964959 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:11 crc kubenswrapper[4730]: E0131 16:43:11.965000 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert podName:58a9ca1b-4bc7-4912-ae16-3210ecea5790 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:12.464984318 +0000 UTC m=+779.271041234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert") pod "infra-operator-controller-manager-79955696d6-89f56" (UID: "58a9ca1b-4bc7-4912-ae16-3210ecea5790") : secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.970270 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.971051 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.981062 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.983463 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xf2gz" Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.987894 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9"] Jan 31 16:43:11 crc kubenswrapper[4730]: I0131 16:43:11.989706 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.010867 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8b7c4" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.017412 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g928s\" (UniqueName: \"kubernetes.io/projected/58bb04d3-9031-43d5-b96f-0874d7ad4f79-kube-api-access-g928s\") pod \"horizon-operator-controller-manager-5fb775575f-w9r8d\" (UID: \"58bb04d3-9031-43d5-b96f-0874d7ad4f79\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.023253 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.023948 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd9hb\" (UniqueName: \"kubernetes.io/projected/db806e61-96eb-4f21-9521-85c8cca3dbb6-kube-api-access-dd9hb\") pod \"heat-operator-controller-manager-69d6db494d-pcvgw\" (UID: \"db806e61-96eb-4f21-9521-85c8cca3dbb6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.047659 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq44q\" (UniqueName: \"kubernetes.io/projected/b542fd94-b4bf-44af-8276-7d2e686f5bb4-kube-api-access-fq44q\") pod \"ironic-operator-controller-manager-5f4b8bd54d-vcgsr\" (UID: \"b542fd94-b4bf-44af-8276-7d2e686f5bb4\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.048699 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.050460 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncg8p\" (UniqueName: \"kubernetes.io/projected/58a9ca1b-4bc7-4912-ae16-3210ecea5790-kube-api-access-ncg8p\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.065781 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdt25\" (UniqueName: \"kubernetes.io/projected/4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4-kube-api-access-jdt25\") pod \"keystone-operator-controller-manager-84f48565d4-dl95k\" (UID: \"4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.065932 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8nwq\" (UniqueName: \"kubernetes.io/projected/f87d7bd0-a9ff-48fc-991c-09dd2931d5bd-kube-api-access-d8nwq\") pod \"mariadb-operator-controller-manager-67bf948998-cwqb6\" (UID: \"f87d7bd0-a9ff-48fc-991c-09dd2931d5bd\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.066031 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkfkc\" (UniqueName: \"kubernetes.io/projected/3cd35794-6a52-452b-9e7b-d1bb4f828dc1-kube-api-access-nkfkc\") pod \"manila-operator-controller-manager-7dd968899f-4nshr\" (UID: \"3cd35794-6a52-452b-9e7b-d1bb4f828dc1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.079129 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.080774 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.086632 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.087508 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.099357 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dj2j6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.099534 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.100238 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-p9n6d" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.115873 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.116661 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.143446 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hx6xc" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.167344 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.175729 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8nwq\" (UniqueName: \"kubernetes.io/projected/f87d7bd0-a9ff-48fc-991c-09dd2931d5bd-kube-api-access-d8nwq\") pod \"mariadb-operator-controller-manager-67bf948998-cwqb6\" (UID: \"f87d7bd0-a9ff-48fc-991c-09dd2931d5bd\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.175872 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkct8\" (UniqueName: \"kubernetes.io/projected/0cfec67f-86ec-4246-9eef-53634c164730-kube-api-access-bkct8\") pod \"neutron-operator-controller-manager-585dbc889-4x5l9\" (UID: \"0cfec67f-86ec-4246-9eef-53634c164730\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.175978 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbhd\" (UniqueName: \"kubernetes.io/projected/113a73b1-4239-42e9-a168-704da54b2c56-kube-api-access-mbbhd\") pod \"octavia-operator-controller-manager-6687f8d877-87zjj\" (UID: \"113a73b1-4239-42e9-a168-704da54b2c56\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.179838 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdt25\" (UniqueName: \"kubernetes.io/projected/4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4-kube-api-access-jdt25\") pod \"keystone-operator-controller-manager-84f48565d4-dl95k\" (UID: \"4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.225336 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.234554 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkfkc\" (UniqueName: \"kubernetes.io/projected/3cd35794-6a52-452b-9e7b-d1bb4f828dc1-kube-api-access-nkfkc\") pod \"manila-operator-controller-manager-7dd968899f-4nshr\" (UID: \"3cd35794-6a52-452b-9e7b-d1bb4f828dc1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.246508 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8nwq\" (UniqueName: \"kubernetes.io/projected/f87d7bd0-a9ff-48fc-991c-09dd2931d5bd-kube-api-access-d8nwq\") pod \"mariadb-operator-controller-manager-67bf948998-cwqb6\" (UID: \"f87d7bd0-a9ff-48fc-991c-09dd2931d5bd\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.277563 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkct8\" (UniqueName: \"kubernetes.io/projected/0cfec67f-86ec-4246-9eef-53634c164730-kube-api-access-bkct8\") pod \"neutron-operator-controller-manager-585dbc889-4x5l9\" (UID: \"0cfec67f-86ec-4246-9eef-53634c164730\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.277621 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.277651 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbhd\" (UniqueName: \"kubernetes.io/projected/113a73b1-4239-42e9-a168-704da54b2c56-kube-api-access-mbbhd\") pod \"octavia-operator-controller-manager-6687f8d877-87zjj\" (UID: \"113a73b1-4239-42e9-a168-704da54b2c56\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.277693 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkf45\" (UniqueName: \"kubernetes.io/projected/7befb81f-95d7-4b23-a23d-2255e67528b0-kube-api-access-zkf45\") pod \"nova-operator-controller-manager-55bff696bd-kdldq\" (UID: \"7befb81f-95d7-4b23-a23d-2255e67528b0\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.277715 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzqx\" (UniqueName: \"kubernetes.io/projected/82fbb691-9ea3-473a-9bd7-22489bcfae0a-kube-api-access-gbzqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.310550 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.311489 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.314974 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbhd\" (UniqueName: \"kubernetes.io/projected/113a73b1-4239-42e9-a168-704da54b2c56-kube-api-access-mbbhd\") pod \"octavia-operator-controller-manager-6687f8d877-87zjj\" (UID: \"113a73b1-4239-42e9-a168-704da54b2c56\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.315982 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.332243 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.332488 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkct8\" (UniqueName: \"kubernetes.io/projected/0cfec67f-86ec-4246-9eef-53634c164730-kube-api-access-bkct8\") pod \"neutron-operator-controller-manager-585dbc889-4x5l9\" (UID: \"0cfec67f-86ec-4246-9eef-53634c164730\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.341539 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.351473 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.363062 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.364130 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.377427 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-98279" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.382199 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.382426 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.382471 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert podName:82fbb691-9ea3-473a-9bd7-22489bcfae0a nodeName:}" failed. No retries permitted until 2026-01-31 16:43:12.882456476 +0000 UTC m=+779.688513392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" (UID: "82fbb691-9ea3-473a-9bd7-22489bcfae0a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.383339 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkf45\" (UniqueName: \"kubernetes.io/projected/7befb81f-95d7-4b23-a23d-2255e67528b0-kube-api-access-zkf45\") pod \"nova-operator-controller-manager-55bff696bd-kdldq\" (UID: \"7befb81f-95d7-4b23-a23d-2255e67528b0\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.383370 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbzqx\" (UniqueName: \"kubernetes.io/projected/82fbb691-9ea3-473a-9bd7-22489bcfae0a-kube-api-access-gbzqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.394639 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.395597 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.403232 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.404059 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.409221 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xhjxd" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.419108 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.424667 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nhnb6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.439854 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.449655 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbzqx\" (UniqueName: \"kubernetes.io/projected/82fbb691-9ea3-473a-9bd7-22489bcfae0a-kube-api-access-gbzqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.452109 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.453014 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.455361 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.457713 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkf45\" (UniqueName: \"kubernetes.io/projected/7befb81f-95d7-4b23-a23d-2255e67528b0-kube-api-access-zkf45\") pod \"nova-operator-controller-manager-55bff696bd-kdldq\" (UID: \"7befb81f-95d7-4b23-a23d-2255e67528b0\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.467690 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6dx4d" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.474667 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.484583 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.485237 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llc86\" (UniqueName: \"kubernetes.io/projected/73250cb3-9b05-4102-b306-6c88d4881a23-kube-api-access-llc86\") pod \"placement-operator-controller-manager-5b964cf4cd-dk9lg\" (UID: \"73250cb3-9b05-4102-b306-6c88d4881a23\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.485277 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.485303 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bck8x\" (UniqueName: \"kubernetes.io/projected/b6911ed2-ca0f-4fed-b5c4-3046ac427b97-kube-api-access-bck8x\") pod \"swift-operator-controller-manager-85df8f7b7c-krdxf\" (UID: \"b6911ed2-ca0f-4fed-b5c4-3046ac427b97\") " pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.485323 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szmgd\" (UniqueName: \"kubernetes.io/projected/ae26b53f-3174-4f96-9bc0-ea8be0ce6b72-kube-api-access-szmgd\") pod \"ovn-operator-controller-manager-788c46999f-j58sp\" (UID: \"ae26b53f-3174-4f96-9bc0-ea8be0ce6b72\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.485432 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.485471 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert podName:58a9ca1b-4bc7-4912-ae16-3210ecea5790 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:13.48545603 +0000 UTC m=+780.291512946 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert") pod "infra-operator-controller-manager-79955696d6-89f56" (UID: "58a9ca1b-4bc7-4912-ae16-3210ecea5790") : secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.517298 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.522262 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.531897 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-std9x" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.547369 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.563761 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.586478 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bck8x\" (UniqueName: \"kubernetes.io/projected/b6911ed2-ca0f-4fed-b5c4-3046ac427b97-kube-api-access-bck8x\") pod \"swift-operator-controller-manager-85df8f7b7c-krdxf\" (UID: \"b6911ed2-ca0f-4fed-b5c4-3046ac427b97\") " pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.586521 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szmgd\" (UniqueName: \"kubernetes.io/projected/ae26b53f-3174-4f96-9bc0-ea8be0ce6b72-kube-api-access-szmgd\") pod \"ovn-operator-controller-manager-788c46999f-j58sp\" (UID: \"ae26b53f-3174-4f96-9bc0-ea8be0ce6b72\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.586548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhccq\" (UniqueName: \"kubernetes.io/projected/e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5-kube-api-access-vhccq\") pod \"telemetry-operator-controller-manager-64b5b76f97-zz8nq\" (UID: \"e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.586632 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llc86\" (UniqueName: \"kubernetes.io/projected/73250cb3-9b05-4102-b306-6c88d4881a23-kube-api-access-llc86\") pod \"placement-operator-controller-manager-5b964cf4cd-dk9lg\" (UID: \"73250cb3-9b05-4102-b306-6c88d4881a23\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.592920 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.602654 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.613450 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.616922 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bck8x\" (UniqueName: \"kubernetes.io/projected/b6911ed2-ca0f-4fed-b5c4-3046ac427b97-kube-api-access-bck8x\") pod \"swift-operator-controller-manager-85df8f7b7c-krdxf\" (UID: \"b6911ed2-ca0f-4fed-b5c4-3046ac427b97\") " pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.623796 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szmgd\" (UniqueName: \"kubernetes.io/projected/ae26b53f-3174-4f96-9bc0-ea8be0ce6b72-kube-api-access-szmgd\") pod \"ovn-operator-controller-manager-788c46999f-j58sp\" (UID: \"ae26b53f-3174-4f96-9bc0-ea8be0ce6b72\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.628100 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llc86\" (UniqueName: \"kubernetes.io/projected/73250cb3-9b05-4102-b306-6c88d4881a23-kube-api-access-llc86\") pod \"placement-operator-controller-manager-5b964cf4cd-dk9lg\" (UID: \"73250cb3-9b05-4102-b306-6c88d4881a23\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.652427 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-g28f6"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.653231 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-g28f6"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.653305 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.665668 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-td88n" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.685162 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.686089 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.693154 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xhpg\" (UniqueName: \"kubernetes.io/projected/03b55837-5391-4dc0-88de-aa3b0893e733-kube-api-access-8xhpg\") pod \"test-operator-controller-manager-56f8bfcd9f-cd9vd\" (UID: \"03b55837-5391-4dc0-88de-aa3b0893e733\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.693193 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhccq\" (UniqueName: \"kubernetes.io/projected/e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5-kube-api-access-vhccq\") pod \"telemetry-operator-controller-manager-64b5b76f97-zz8nq\" (UID: \"e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.697735 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.697928 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.698110 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qjfmc" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.703112 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.719368 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.722441 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhccq\" (UniqueName: \"kubernetes.io/projected/e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5-kube-api-access-vhccq\") pod \"telemetry-operator-controller-manager-64b5b76f97-zz8nq\" (UID: \"e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.742293 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.778347 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.794498 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7qps\" (UniqueName: \"kubernetes.io/projected/17116685-ca76-4a23-9b73-04cec9287254-kube-api-access-k7qps\") pod \"watcher-operator-controller-manager-564965969-g28f6\" (UID: \"17116685-ca76-4a23-9b73-04cec9287254\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.794555 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhxkh\" (UniqueName: \"kubernetes.io/projected/e76dee4f-067c-436f-85c4-0c538a334973-kube-api-access-lhxkh\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.794586 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.794635 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xhpg\" (UniqueName: \"kubernetes.io/projected/03b55837-5391-4dc0-88de-aa3b0893e733-kube-api-access-8xhpg\") pod \"test-operator-controller-manager-56f8bfcd9f-cd9vd\" (UID: \"03b55837-5391-4dc0-88de-aa3b0893e733\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.794694 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.806843 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.807662 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.838157 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-56rkv" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.840967 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.895976 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.896031 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7qps\" (UniqueName: \"kubernetes.io/projected/17116685-ca76-4a23-9b73-04cec9287254-kube-api-access-k7qps\") pod \"watcher-operator-controller-manager-564965969-g28f6\" (UID: \"17116685-ca76-4a23-9b73-04cec9287254\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.896063 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psm5\" (UniqueName: \"kubernetes.io/projected/37bb03aa-53be-43db-bcbc-5b0ea10eb72e-kube-api-access-2psm5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lj76z\" (UID: \"37bb03aa-53be-43db-bcbc-5b0ea10eb72e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.896098 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhxkh\" (UniqueName: \"kubernetes.io/projected/e76dee4f-067c-436f-85c4-0c538a334973-kube-api-access-lhxkh\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.896128 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.896181 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.896203 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.896287 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:13.396262011 +0000 UTC m=+780.202318927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "metrics-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.896346 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.896401 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert podName:82fbb691-9ea3-473a-9bd7-22489bcfae0a nodeName:}" failed. No retries permitted until 2026-01-31 16:43:13.896382255 +0000 UTC m=+780.702439171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" (UID: "82fbb691-9ea3-473a-9bd7-22489bcfae0a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.896610 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: E0131 16:43:12.896647 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:13.396634752 +0000 UTC m=+780.202691668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "webhook-server-cert" not found Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.907953 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xhpg\" (UniqueName: \"kubernetes.io/projected/03b55837-5391-4dc0-88de-aa3b0893e733-kube-api-access-8xhpg\") pod \"test-operator-controller-manager-56f8bfcd9f-cd9vd\" (UID: \"03b55837-5391-4dc0-88de-aa3b0893e733\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.940474 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhxkh\" (UniqueName: \"kubernetes.io/projected/e76dee4f-067c-436f-85c4-0c538a334973-kube-api-access-lhxkh\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.940706 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.948631 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7qps\" (UniqueName: \"kubernetes.io/projected/17116685-ca76-4a23-9b73-04cec9287254-kube-api-access-k7qps\") pod \"watcher-operator-controller-manager-564965969-g28f6\" (UID: \"17116685-ca76-4a23-9b73-04cec9287254\") " 
pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.968599 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd"] Jan 31 16:43:12 crc kubenswrapper[4730]: I0131 16:43:12.998551 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2psm5\" (UniqueName: \"kubernetes.io/projected/37bb03aa-53be-43db-bcbc-5b0ea10eb72e-kube-api-access-2psm5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lj76z\" (UID: \"37bb03aa-53be-43db-bcbc-5b0ea10eb72e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.004442 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.030971 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2psm5\" (UniqueName: \"kubernetes.io/projected/37bb03aa-53be-43db-bcbc-5b0ea10eb72e-kube-api-access-2psm5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lj76z\" (UID: \"37bb03aa-53be-43db-bcbc-5b0ea10eb72e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.090633 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.106464 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.167321 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.310555 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d"] Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.317178 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58bb04d3_9031_43d5_b96f_0874d7ad4f79.slice/crio-79e2b1bb370a484a32fe2d835cdfbbe144c4bb74e109263ff565196c616c30a9 WatchSource:0}: Error finding container 79e2b1bb370a484a32fe2d835cdfbbe144c4bb74e109263ff565196c616c30a9: Status 404 returned error can't find the container with id 79e2b1bb370a484a32fe2d835cdfbbe144c4bb74e109263ff565196c616c30a9 Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.330775 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.352065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" event={"ID":"d13ce75a-a1e5-4a49-a46a-514b904c460a","Type":"ContainerStarted","Data":"a46336c9564c719f3b503fe0b50d363a18a39c1e7b656ebfb7ca4fe02d301c0e"} Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.353033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" event={"ID":"13990a08-64f5-47af-a6fb-59b6b547fe7f","Type":"ContainerStarted","Data":"8676c10587a8249179460cae511999d03004fb7a20a6ded3032be6e53d2301d1"} Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.353770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" event={"ID":"1ecbf8bc-da38-4cc2-8d7e-eef855555957","Type":"ContainerStarted","Data":"64d4fb231192711fe7183be0b58de99d8632db1ac95c2cd83a238c7c9164761c"} Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.354642 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" event={"ID":"58bb04d3-9031-43d5-b96f-0874d7ad4f79","Type":"ContainerStarted","Data":"79e2b1bb370a484a32fe2d835cdfbbe144c4bb74e109263ff565196c616c30a9"} Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.408440 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.408521 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.408634 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.408681 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:14.408666447 +0000 UTC m=+781.214723363 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "webhook-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.408688 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.408762 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:14.408742799 +0000 UTC m=+781.214799715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "metrics-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.500139 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6"] Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.508170 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf87d7bd0_a9ff_48fc_991c_09dd2931d5bd.slice/crio-5ecc48d043fb3e6fc49583912170c6ac207bfbb2a7d4ca8686f22948206cec7d WatchSource:0}: Error finding container 5ecc48d043fb3e6fc49583912170c6ac207bfbb2a7d4ca8686f22948206cec7d: Status 404 returned error can't find the container with id 5ecc48d043fb3e6fc49583912170c6ac207bfbb2a7d4ca8686f22948206cec7d Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.510866 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.511618 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.511657 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert podName:58a9ca1b-4bc7-4912-ae16-3210ecea5790 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:15.51164368 +0000 UTC m=+782.317700596 (durationBeforeRetry 2s). 
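
Note on the mount failures above: they all point at Secrets that do not yet exist in the openstack-operators namespace (metrics-server-cert, webhook-server-cert, infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert), so the kubelet keeps retrying the volume setup until the certificates appear. A minimal sketch of checking which of those Secrets are still missing, assuming cluster access and the standard kubernetes Python client (the secret names are taken from the log entries above and are not exhaustive):

# Sketch only: report which cert Secrets named in the MountVolume errors are still missing.
# Assumes a reachable cluster and the "kubernetes" Python client; this is not kubelet code.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

NAMESPACE = "openstack-operators"
SECRETS = [  # names as they appear in the MountVolume errors above
    "metrics-server-cert",
    "webhook-server-cert",
    "infra-operator-webhook-server-cert",
    "openstack-baremetal-operator-webhook-server-cert",
]

def missing_secrets():
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    missing = []
    for name in SECRETS:
        try:
            v1.read_namespaced_secret(name, NAMESPACE)
        except ApiException as exc:
            if exc.status == 404:      # the condition kubelet logs as: secret "..." not found
                missing.append(name)
            else:
                raise
    return missing

if __name__ == "__main__":
    print(missing_secrets())

If cert-manager is responsible for issuing these certificates in this deployment, the corresponding Certificate and Issuer resources would be the next place to look.
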
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert") pod "infra-operator-controller-manager-79955696d6-89f56" (UID: "58a9ca1b-4bc7-4912-ae16-3210ecea5790") : secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.522923 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.536002 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k"] Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.538585 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb542fd94_b4bf_44af_8276_7d2e686f5bb4.slice/crio-9c1e4dea1cdc2f46325c14c29ff425ae588430e273c42293e4c1c53f8b49ccdd WatchSource:0}: Error finding container 9c1e4dea1cdc2f46325c14c29ff425ae588430e273c42293e4c1c53f8b49ccdd: Status 404 returned error can't find the container with id 9c1e4dea1cdc2f46325c14c29ff425ae588430e273c42293e4c1c53f8b49ccdd Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.543278 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr"] Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.550517 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb806e61_96eb_4f21_9521_85c8cca3dbb6.slice/crio-f10c13b61371cc994ab44ac2c9f6a7040559ea6b3e2a617af3a93c6cd5707467 WatchSource:0}: Error finding container f10c13b61371cc994ab44ac2c9f6a7040559ea6b3e2a617af3a93c6cd5707467: Status 404 returned error can't find the container with id f10c13b61371cc994ab44ac2c9f6a7040559ea6b3e2a617af3a93c6cd5707467 Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.552750 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.561224 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.727492 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq"] Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.743092 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode96a04a7_bf1d_4a9d_9cc4_5b193c22f7a5.slice/crio-14e85fddf9c4efa3137ecd70e0108784de4d9e0463a0cd41185a6fd0be4b0e72 WatchSource:0}: Error finding container 14e85fddf9c4efa3137ecd70e0108784de4d9e0463a0cd41185a6fd0be4b0e72: Status 404 returned error can't find the container with id 14e85fddf9c4efa3137ecd70e0108784de4d9e0463a0cd41185a6fd0be4b0e72 Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.747928 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.757489 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.763594 4730 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf"] Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.763843 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7befb81f_95d7_4b23_a23d_2255e67528b0.slice/crio-e18c31fcce5ea036535104a906a96e19aec0b3145732052246921a7b2a82ab3f WatchSource:0}: Error finding container e18c31fcce5ea036535104a906a96e19aec0b3145732052246921a7b2a82ab3f: Status 404 returned error can't find the container with id e18c31fcce5ea036535104a906a96e19aec0b3145732052246921a7b2a82ab3f Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.771363 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.774969 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9"] Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.775442 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szmgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-j58sp_openstack-operators(ae26b53f-3174-4f96-9bc0-ea8be0ce6b72): ErrImagePull: 
pull QPS exceeded" logger="UnhandledError" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.777323 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" podUID="ae26b53f-3174-4f96-9bc0-ea8be0ce6b72" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.778465 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkct8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-4x5l9_openstack-operators(0cfec67f-86ec-4246-9eef-53634c164730): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.779972 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" podUID="0cfec67f-86ec-4246-9eef-53634c164730" Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.923766 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.925924 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.926107 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.926193 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert podName:82fbb691-9ea3-473a-9bd7-22489bcfae0a nodeName:}" failed. No retries permitted until 2026-01-31 16:43:15.926172536 +0000 UTC m=+782.732229452 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" (UID: "82fbb691-9ea3-473a-9bd7-22489bcfae0a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.929864 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-g28f6"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.933397 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg"] Jan 31 16:43:13 crc kubenswrapper[4730]: I0131 16:43:13.936790 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd"] Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.937555 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k7qps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-g28f6_openstack-operators(17116685-ca76-4a23-9b73-04cec9287254): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.938622 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" podUID="17116685-ca76-4a23-9b73-04cec9287254" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.938686 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2psm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lj76z_openstack-operators(37bb03aa-53be-43db-bcbc-5b0ea10eb72e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.939621 4730 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73250cb3_9b05_4102_b306_6c88d4881a23.slice/crio-1f913f1e6379ea0f57bef278ec3fe9299a71a8466898e4af474b3708c15b12a2 WatchSource:0}: Error finding container 1f913f1e6379ea0f57bef278ec3fe9299a71a8466898e4af474b3708c15b12a2: Status 404 returned error can't find the container with id 1f913f1e6379ea0f57bef278ec3fe9299a71a8466898e4af474b3708c15b12a2 Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.941768 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" podUID="37bb03aa-53be-43db-bcbc-5b0ea10eb72e" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.943723 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-llc86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-dk9lg_openstack-operators(73250cb3-9b05-4102-b306-6c88d4881a23): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 16:43:13 crc kubenswrapper[4730]: E0131 16:43:13.944859 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" podUID="73250cb3-9b05-4102-b306-6c88d4881a23" Jan 31 16:43:13 crc kubenswrapper[4730]: W0131 16:43:13.960831 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b55837_5391_4dc0_88de_aa3b0893e733.slice/crio-bab7bca9cf2a8f0a070e1b782fb5c2f31b0201d98f3971897d9719827cab9c00 WatchSource:0}: Error finding container bab7bca9cf2a8f0a070e1b782fb5c2f31b0201d98f3971897d9719827cab9c00: Status 404 returned error can't find the container with id bab7bca9cf2a8f0a070e1b782fb5c2f31b0201d98f3971897d9719827cab9c00 Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.362457 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" event={"ID":"113a73b1-4239-42e9-a168-704da54b2c56","Type":"ContainerStarted","Data":"ec30be80d88504e7095f307e4d249f0c19cc355dba136f0c488ec63b49f7415e"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.363851 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" event={"ID":"ae26b53f-3174-4f96-9bc0-ea8be0ce6b72","Type":"ContainerStarted","Data":"8165e518c3f40477eb64e1a7966e71048bac830c08ea1f9308009714bf0a2175"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.365851 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" event={"ID":"4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4","Type":"ContainerStarted","Data":"74069de118afa4b0e7ee9f1501f6883bca9e7d6cf3e9a033b44a33da7024651e"} Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.368604 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" podUID="ae26b53f-3174-4f96-9bc0-ea8be0ce6b72" Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.369200 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" event={"ID":"f87d7bd0-a9ff-48fc-991c-09dd2931d5bd","Type":"ContainerStarted","Data":"5ecc48d043fb3e6fc49583912170c6ac207bfbb2a7d4ca8686f22948206cec7d"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.372162 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" event={"ID":"5d112f3e-564e-4003-90fe-6472c5643d40","Type":"ContainerStarted","Data":"49b237f65d09e4ac16970b7ec94881a6bbd294883a3def7afe3a005b5f71214d"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.373694 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" event={"ID":"db806e61-96eb-4f21-9521-85c8cca3dbb6","Type":"ContainerStarted","Data":"f10c13b61371cc994ab44ac2c9f6a7040559ea6b3e2a617af3a93c6cd5707467"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.374873 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" 
event={"ID":"03b55837-5391-4dc0-88de-aa3b0893e733","Type":"ContainerStarted","Data":"bab7bca9cf2a8f0a070e1b782fb5c2f31b0201d98f3971897d9719827cab9c00"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.376477 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" event={"ID":"37bb03aa-53be-43db-bcbc-5b0ea10eb72e","Type":"ContainerStarted","Data":"9e3366971ea2d1443511800244ea32b597b30bfd0f37699cd8f7ffb5d3dba703"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.384495 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" event={"ID":"17116685-ca76-4a23-9b73-04cec9287254","Type":"ContainerStarted","Data":"cbc2ee364928c221d5d5fadb89d5425354b8cbbcbf16183e586272b1d3bffa2c"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.386517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" event={"ID":"7befb81f-95d7-4b23-a23d-2255e67528b0","Type":"ContainerStarted","Data":"e18c31fcce5ea036535104a906a96e19aec0b3145732052246921a7b2a82ab3f"} Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.387968 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" podUID="37bb03aa-53be-43db-bcbc-5b0ea10eb72e" Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.393481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" event={"ID":"3cd35794-6a52-452b-9e7b-d1bb4f828dc1","Type":"ContainerStarted","Data":"e9489470017090b09f1346a97d7ea700336564fd5d3a54596ed69f465a97b72a"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.396736 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" event={"ID":"b542fd94-b4bf-44af-8276-7d2e686f5bb4","Type":"ContainerStarted","Data":"9c1e4dea1cdc2f46325c14c29ff425ae588430e273c42293e4c1c53f8b49ccdd"} Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.407487 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" podUID="17116685-ca76-4a23-9b73-04cec9287254" Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.411557 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" event={"ID":"0cfec67f-86ec-4246-9eef-53634c164730","Type":"ContainerStarted","Data":"d65a5ec7cc456e322ddc8d124ecc53f35d4bf567856ad4e8c89298db97ffd440"} Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.413930 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" podUID="0cfec67f-86ec-4246-9eef-53634c164730" Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.416395 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" event={"ID":"b6911ed2-ca0f-4fed-b5c4-3046ac427b97","Type":"ContainerStarted","Data":"b480b1916ae33e13c68f6c2c5a1ba5507c4b09a5d39a6057f435b1e9873c390c"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.420111 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" event={"ID":"73250cb3-9b05-4102-b306-6c88d4881a23","Type":"ContainerStarted","Data":"1f913f1e6379ea0f57bef278ec3fe9299a71a8466898e4af474b3708c15b12a2"} Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.427728 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" event={"ID":"e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5","Type":"ContainerStarted","Data":"14e85fddf9c4efa3137ecd70e0108784de4d9e0463a0cd41185a6fd0be4b0e72"} Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.430731 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" podUID="73250cb3-9b05-4102-b306-6c88d4881a23" Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.433231 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:14 crc kubenswrapper[4730]: I0131 16:43:14.433329 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.433891 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.433931 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:16.43391667 +0000 UTC m=+783.239973576 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "metrics-server-cert" not found Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.434024 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 16:43:14 crc kubenswrapper[4730]: E0131 16:43:14.434087 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:16.434070635 +0000 UTC m=+783.240127551 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "webhook-server-cert" not found Jan 31 16:43:15 crc kubenswrapper[4730]: E0131 16:43:15.458187 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" podUID="0cfec67f-86ec-4246-9eef-53634c164730" Jan 31 16:43:15 crc kubenswrapper[4730]: E0131 16:43:15.458590 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" podUID="ae26b53f-3174-4f96-9bc0-ea8be0ce6b72" Jan 31 16:43:15 crc kubenswrapper[4730]: E0131 16:43:15.459707 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" podUID="37bb03aa-53be-43db-bcbc-5b0ea10eb72e" Jan 31 16:43:15 crc kubenswrapper[4730]: E0131 16:43:15.460595 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" podUID="17116685-ca76-4a23-9b73-04cec9287254" Jan 31 16:43:15 crc kubenswrapper[4730]: E0131 16:43:15.460907 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" podUID="73250cb3-9b05-4102-b306-6c88d4881a23" Jan 31 16:43:15 crc 
kubenswrapper[4730]: I0131 16:43:15.607844 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:15 crc kubenswrapper[4730]: E0131 16:43:15.608058 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:15 crc kubenswrapper[4730]: E0131 16:43:15.608133 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert podName:58a9ca1b-4bc7-4912-ae16-3210ecea5790 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:19.608113643 +0000 UTC m=+786.414170559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert") pod "infra-operator-controller-manager-79955696d6-89f56" (UID: "58a9ca1b-4bc7-4912-ae16-3210ecea5790") : secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:16 crc kubenswrapper[4730]: I0131 16:43:16.020670 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:16 crc kubenswrapper[4730]: E0131 16:43:16.020874 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:16 crc kubenswrapper[4730]: E0131 16:43:16.020952 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert podName:82fbb691-9ea3-473a-9bd7-22489bcfae0a nodeName:}" failed. No retries permitted until 2026-01-31 16:43:20.02093218 +0000 UTC m=+786.826989096 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" (UID: "82fbb691-9ea3-473a-9bd7-22489bcfae0a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:16 crc kubenswrapper[4730]: I0131 16:43:16.529341 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:16 crc kubenswrapper[4730]: I0131 16:43:16.529497 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:16 crc kubenswrapper[4730]: E0131 16:43:16.530870 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 16:43:16 crc kubenswrapper[4730]: E0131 16:43:16.530930 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:20.530903907 +0000 UTC m=+787.336960823 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "metrics-server-cert" not found Jan 31 16:43:16 crc kubenswrapper[4730]: E0131 16:43:16.532299 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 16:43:16 crc kubenswrapper[4730]: E0131 16:43:16.532326 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:20.532317107 +0000 UTC m=+787.338374023 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "webhook-server-cert" not found Jan 31 16:43:19 crc kubenswrapper[4730]: I0131 16:43:19.705348 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:19 crc kubenswrapper[4730]: E0131 16:43:19.705820 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:19 crc kubenswrapper[4730]: E0131 16:43:19.705869 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert podName:58a9ca1b-4bc7-4912-ae16-3210ecea5790 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:27.705853363 +0000 UTC m=+794.511910279 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert") pod "infra-operator-controller-manager-79955696d6-89f56" (UID: "58a9ca1b-4bc7-4912-ae16-3210ecea5790") : secret "infra-operator-webhook-server-cert" not found Jan 31 16:43:20 crc kubenswrapper[4730]: I0131 16:43:20.111704 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:20 crc kubenswrapper[4730]: E0131 16:43:20.111933 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:20 crc kubenswrapper[4730]: E0131 16:43:20.112029 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert podName:82fbb691-9ea3-473a-9bd7-22489bcfae0a nodeName:}" failed. No retries permitted until 2026-01-31 16:43:28.112005734 +0000 UTC m=+794.918062660 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" (UID: "82fbb691-9ea3-473a-9bd7-22489bcfae0a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 16:43:20 crc kubenswrapper[4730]: I0131 16:43:20.619971 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:20 crc kubenswrapper[4730]: E0131 16:43:20.620196 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 16:43:20 crc kubenswrapper[4730]: I0131 16:43:20.620527 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:20 crc kubenswrapper[4730]: E0131 16:43:20.620656 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:28.620556 +0000 UTC m=+795.426612956 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "webhook-server-cert" not found Jan 31 16:43:20 crc kubenswrapper[4730]: E0131 16:43:20.620736 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 16:43:20 crc kubenswrapper[4730]: E0131 16:43:20.620949 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:28.620926641 +0000 UTC m=+795.426983587 (durationBeforeRetry 8s). 
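
Note on the retry timing: the durationBeforeRetry values in the mount errors double on each attempt (500ms, 1s, 2s, 4s, 8s). A small sketch of that doubling schedule; the cap used here is an assumed placeholder, not the kubelet's configured maximum:

# Sketch of an exponentially doubling retry delay, as seen in durationBeforeRetry above.
# The 2-minute cap is an assumption for illustration, not taken from kubelet source.
from datetime import timedelta

def backoff_delays(initial=timedelta(milliseconds=500),
                   cap=timedelta(minutes=2), attempts=8):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print([str(d) for d in backoff_delays()])
# first values: 0:00:00.500000, 0:00:01, 0:00:02, 0:00:04, 0:00:08, ... (the progression in the log)
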
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "metrics-server-cert" not found Jan 31 16:43:26 crc kubenswrapper[4730]: I0131 16:43:26.975264 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:43:26 crc kubenswrapper[4730]: I0131 16:43:26.975724 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:43:26 crc kubenswrapper[4730]: E0131 16:43:26.999577 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 31 16:43:27 crc kubenswrapper[4730]: E0131 16:43:26.999928 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbbhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-87zjj_openstack-operators(113a73b1-4239-42e9-a168-704da54b2c56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:27 crc kubenswrapper[4730]: E0131 16:43:27.001309 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" podUID="113a73b1-4239-42e9-a168-704da54b2c56" Jan 31 16:43:27 crc kubenswrapper[4730]: E0131 16:43:27.546524 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" podUID="113a73b1-4239-42e9-a168-704da54b2c56" Jan 31 16:43:27 crc kubenswrapper[4730]: E0131 16:43:27.721978 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 31 16:43:27 crc kubenswrapper[4730]: E0131 16:43:27.722385 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d8nwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-cwqb6_openstack-operators(f87d7bd0-a9ff-48fc-991c-09dd2931d5bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:27 crc kubenswrapper[4730]: E0131 16:43:27.723916 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" podUID="f87d7bd0-a9ff-48fc-991c-09dd2931d5bd" Jan 31 16:43:27 crc kubenswrapper[4730]: I0131 16:43:27.746484 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:27 crc kubenswrapper[4730]: I0131 16:43:27.753279 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a9ca1b-4bc7-4912-ae16-3210ecea5790-cert\") pod \"infra-operator-controller-manager-79955696d6-89f56\" (UID: \"58a9ca1b-4bc7-4912-ae16-3210ecea5790\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:27 crc kubenswrapper[4730]: I0131 16:43:27.988737 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wbvcr" Jan 31 16:43:27 crc kubenswrapper[4730]: I0131 16:43:27.997124 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:28 crc kubenswrapper[4730]: I0131 16:43:28.151726 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:28 crc kubenswrapper[4730]: I0131 16:43:28.160541 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82fbb691-9ea3-473a-9bd7-22489bcfae0a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn\" (UID: \"82fbb691-9ea3-473a-9bd7-22489bcfae0a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.247663 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.53:5001/openstack-k8s-operators/swift-operator:b656a0d4c3289dd10bc234fd1e2c36c5db0209c9" Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.247745 4730 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.53:5001/openstack-k8s-operators/swift-operator:b656a0d4c3289dd10bc234fd1e2c36c5db0209c9" Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.247874 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.53:5001/openstack-k8s-operators/swift-operator:b656a0d4c3289dd10bc234fd1e2c36c5db0209c9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bck8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85df8f7b7c-krdxf_openstack-operators(b6911ed2-ca0f-4fed-b5c4-3046ac427b97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.249289 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" podUID="b6911ed2-ca0f-4fed-b5c4-3046ac427b97" Jan 31 16:43:28 crc kubenswrapper[4730]: I0131 16:43:28.415630 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dj2j6" Jan 31 16:43:28 crc kubenswrapper[4730]: I0131 16:43:28.424217 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.552243 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.53:5001/openstack-k8s-operators/swift-operator:b656a0d4c3289dd10bc234fd1e2c36c5db0209c9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" podUID="b6911ed2-ca0f-4fed-b5c4-3046ac427b97" Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.555332 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" podUID="f87d7bd0-a9ff-48fc-991c-09dd2931d5bd" Jan 31 16:43:28 crc kubenswrapper[4730]: I0131 16:43:28.658053 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:28 crc kubenswrapper[4730]: I0131 16:43:28.658129 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: 
\"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.658675 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.658724 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:44.658707918 +0000 UTC m=+811.464764824 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "metrics-server-cert" not found Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.659949 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 16:43:28 crc kubenswrapper[4730]: E0131 16:43:28.659983 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs podName:e76dee4f-067c-436f-85c4-0c538a334973 nodeName:}" failed. No retries permitted until 2026-01-31 16:43:44.659973904 +0000 UTC m=+811.466030820 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs") pod "openstack-operator-controller-manager-5c77fbfdf8-th7sg" (UID: "e76dee4f-067c-436f-85c4-0c538a334973") : secret "webhook-server-cert" not found Jan 31 16:43:29 crc kubenswrapper[4730]: E0131 16:43:29.439261 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 31 16:43:29 crc kubenswrapper[4730]: E0131 16:43:29.439429 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vhccq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-zz8nq_openstack-operators(e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:29 crc kubenswrapper[4730]: E0131 16:43:29.440648 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" podUID="e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5" Jan 31 16:43:29 crc kubenswrapper[4730]: E0131 16:43:29.560358 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" podUID="e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5" Jan 31 16:43:29 crc kubenswrapper[4730]: E0131 16:43:29.952597 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 31 16:43:29 crc kubenswrapper[4730]: E0131 16:43:29.952905 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkfkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-4nshr_openstack-operators(3cd35794-6a52-452b-9e7b-d1bb4f828dc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:29 crc kubenswrapper[4730]: E0131 16:43:29.954131 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" podUID="3cd35794-6a52-452b-9e7b-d1bb4f828dc1" Jan 31 16:43:30 crc kubenswrapper[4730]: E0131 16:43:30.563688 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" podUID="3cd35794-6a52-452b-9e7b-d1bb4f828dc1" Jan 31 16:43:32 crc kubenswrapper[4730]: E0131 16:43:32.578303 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 31 16:43:32 crc kubenswrapper[4730]: E0131 16:43:32.578880 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fq44q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-vcgsr_openstack-operators(b542fd94-b4bf-44af-8276-7d2e686f5bb4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:32 crc kubenswrapper[4730]: E0131 16:43:32.581582 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" podUID="b542fd94-b4bf-44af-8276-7d2e686f5bb4" Jan 31 16:43:33 crc kubenswrapper[4730]: E0131 16:43:33.546992 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 31 16:43:33 crc kubenswrapper[4730]: E0131 16:43:33.547213 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hc54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-v5rrb_openstack-operators(5d112f3e-564e-4003-90fe-6472c5643d40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:33 crc kubenswrapper[4730]: E0131 16:43:33.550286 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" podUID="5d112f3e-564e-4003-90fe-6472c5643d40" Jan 31 16:43:33 crc kubenswrapper[4730]: E0131 16:43:33.589252 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" podUID="b542fd94-b4bf-44af-8276-7d2e686f5bb4" Jan 31 16:43:33 crc kubenswrapper[4730]: E0131 16:43:33.589266 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" 
podUID="5d112f3e-564e-4003-90fe-6472c5643d40" Jan 31 16:43:36 crc kubenswrapper[4730]: E0131 16:43:36.067572 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 31 16:43:36 crc kubenswrapper[4730]: E0131 16:43:36.067720 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jdt25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-dl95k_openstack-operators(4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:36 crc kubenswrapper[4730]: E0131 16:43:36.069185 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" podUID="4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4" Jan 31 16:43:36 crc kubenswrapper[4730]: E0131 16:43:36.604673 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" podUID="4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4" Jan 31 16:43:40 crc kubenswrapper[4730]: E0131 16:43:40.450063 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 31 16:43:40 crc kubenswrapper[4730]: E0131 16:43:40.450816 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkf45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-kdldq_openstack-operators(7befb81f-95d7-4b23-a23d-2255e67528b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:43:40 crc kubenswrapper[4730]: E0131 16:43:40.452071 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" 
podUID="7befb81f-95d7-4b23-a23d-2255e67528b0" Jan 31 16:43:40 crc kubenswrapper[4730]: E0131 16:43:40.630792 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" podUID="7befb81f-95d7-4b23-a23d-2255e67528b0" Jan 31 16:43:42 crc kubenswrapper[4730]: I0131 16:43:42.181418 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-89f56"] Jan 31 16:43:42 crc kubenswrapper[4730]: I0131 16:43:42.457392 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn"] Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.648519 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" event={"ID":"1ecbf8bc-da38-4cc2-8d7e-eef855555957","Type":"ContainerStarted","Data":"cf2ceb117500bb2126b83893d41c2ccb976f6f22cfa49966aa2a1ac35e5f97f2"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.649092 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.654903 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" event={"ID":"113a73b1-4239-42e9-a168-704da54b2c56","Type":"ContainerStarted","Data":"6e4f465733518bb60217ead19ba5f10be2d941528f7fd053dda1ee837eb695e7"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.655477 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.656425 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" event={"ID":"ae26b53f-3174-4f96-9bc0-ea8be0ce6b72","Type":"ContainerStarted","Data":"2011020a255d89907b7a31988e37abba3eef6aa783e98b08da7b64bdaa9a4a90"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.656762 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.657626 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" event={"ID":"17116685-ca76-4a23-9b73-04cec9287254","Type":"ContainerStarted","Data":"2da3acabbb241be8edd8e9c1e0897b0d2f609f3fb554d0e55cee9cdf755f84bf"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.657958 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.664405 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" event={"ID":"82fbb691-9ea3-473a-9bd7-22489bcfae0a","Type":"ContainerStarted","Data":"74eee281deea2f0415a332aa8beed5c6358eec3340f6f1b8ffd67996b908d6c4"} Jan 31 16:43:43 crc kubenswrapper[4730]: 
I0131 16:43:43.671747 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" event={"ID":"58a9ca1b-4bc7-4912-ae16-3210ecea5790","Type":"ContainerStarted","Data":"05400e44d26b3300225b19c6cc48ad6db56f8f2ff55ad820332e1c195aa86fbf"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.676050 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" event={"ID":"73250cb3-9b05-4102-b306-6c88d4881a23","Type":"ContainerStarted","Data":"dafa9cee1602a40707c214fe437d2a99bd08484bc657fefd5039c8b02b212a47"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.676639 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.690658 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" event={"ID":"13990a08-64f5-47af-a6fb-59b6b547fe7f","Type":"ContainerStarted","Data":"df096a03c520476c242a6d9d5de213499a0c9f06d1b3ee2a47cf4cd5b674e308"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.690850 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.695167 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" event={"ID":"37bb03aa-53be-43db-bcbc-5b0ea10eb72e","Type":"ContainerStarted","Data":"dbbddbbbb3a437d8ed5a5e08f21bba18e8ccffdc5ec21aba4373916e926c02e4"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.696655 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" event={"ID":"d13ce75a-a1e5-4a49-a46a-514b904c460a","Type":"ContainerStarted","Data":"d207f661de482b65a97ab144bad6d6a378f51d8b8ebdb85386c17f2bc1b4b334"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.697003 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.697837 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" event={"ID":"f87d7bd0-a9ff-48fc-991c-09dd2931d5bd","Type":"ContainerStarted","Data":"5f5ef69ac94a88182d97881acd98a838e56e61436dcd5912c4d3e6810fe58494"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.698145 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.699025 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" event={"ID":"0cfec67f-86ec-4246-9eef-53634c164730","Type":"ContainerStarted","Data":"79551aaf4ff79575695e6d3fb29a29331e6f911b7d6f47553f4366bcbc032780"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.699330 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.700190 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" event={"ID":"db806e61-96eb-4f21-9521-85c8cca3dbb6","Type":"ContainerStarted","Data":"2bc600be31c28fc06ed154e084df77b2cef215e4cbd546690e82786f0ade04d9"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.700499 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.717416 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" event={"ID":"03b55837-5391-4dc0-88de-aa3b0893e733","Type":"ContainerStarted","Data":"299469f05efab3b887d9261d990c645a9b523f5331b0c2662a119c09f04b5fcd"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.718149 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.728984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" event={"ID":"3cd35794-6a52-452b-9e7b-d1bb4f828dc1","Type":"ContainerStarted","Data":"ed1ccd50725a8038c5b2cefd161c4875bd3583b902ce5fee8313bcbc5899df81"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.729557 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.743001 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" event={"ID":"58bb04d3-9031-43d5-b96f-0874d7ad4f79","Type":"ContainerStarted","Data":"115e0c5581f80f62488b7f3734153dbd23751bdd52e218b55788e56c0eb7c249"} Jan 31 16:43:43 crc kubenswrapper[4730]: I0131 16:43:43.743143 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.156281 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" podStartSLOduration=10.661700532 podStartE2EDuration="33.156259316s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.5559892 +0000 UTC m=+780.362046116" lastFinishedPulling="2026-01-31 16:43:36.050547984 +0000 UTC m=+802.856604900" observedRunningTime="2026-01-31 16:43:44.066019372 +0000 UTC m=+810.872076288" watchObservedRunningTime="2026-01-31 16:43:44.156259316 +0000 UTC m=+810.962316232" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.260739 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" podStartSLOduration=4.551449156 podStartE2EDuration="33.260724051s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.943603758 +0000 UTC m=+780.749660674" lastFinishedPulling="2026-01-31 16:43:42.652878653 +0000 UTC m=+809.458935569" observedRunningTime="2026-01-31 16:43:44.162390159 +0000 UTC m=+810.968447075" watchObservedRunningTime="2026-01-31 16:43:44.260724051 +0000 UTC m=+811.066780967" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.295963 4730 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" podStartSLOduration=10.206692437 podStartE2EDuration="32.295946024s" podCreationTimestamp="2026-01-31 16:43:12 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.963225421 +0000 UTC m=+780.769282337" lastFinishedPulling="2026-01-31 16:43:36.052479008 +0000 UTC m=+802.858535924" observedRunningTime="2026-01-31 16:43:44.263009925 +0000 UTC m=+811.069066831" watchObservedRunningTime="2026-01-31 16:43:44.295946024 +0000 UTC m=+811.102002940" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.359558 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" podStartSLOduration=4.106565514 podStartE2EDuration="33.359542956s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.510285852 +0000 UTC m=+780.316342768" lastFinishedPulling="2026-01-31 16:43:42.763263294 +0000 UTC m=+809.569320210" observedRunningTime="2026-01-31 16:43:44.296358776 +0000 UTC m=+811.102415692" watchObservedRunningTime="2026-01-31 16:43:44.359542956 +0000 UTC m=+811.165599872" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.360163 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" podStartSLOduration=9.913503288 podStartE2EDuration="33.360157783s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.102745662 +0000 UTC m=+779.908802578" lastFinishedPulling="2026-01-31 16:43:36.549400157 +0000 UTC m=+803.355457073" observedRunningTime="2026-01-31 16:43:44.343794212 +0000 UTC m=+811.149851128" watchObservedRunningTime="2026-01-31 16:43:44.360157783 +0000 UTC m=+811.166214699" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.415865 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" podStartSLOduration=4.404470151 podStartE2EDuration="33.415850183s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.766040662 +0000 UTC m=+780.572097578" lastFinishedPulling="2026-01-31 16:43:42.777420694 +0000 UTC m=+809.583477610" observedRunningTime="2026-01-31 16:43:44.413029144 +0000 UTC m=+811.219086060" watchObservedRunningTime="2026-01-31 16:43:44.415850183 +0000 UTC m=+811.221907099" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.444513 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lj76z" podStartSLOduration=3.589856518 podStartE2EDuration="32.444497901s" podCreationTimestamp="2026-01-31 16:43:12 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.938623967 +0000 UTC m=+780.744680883" lastFinishedPulling="2026-01-31 16:43:42.79326535 +0000 UTC m=+809.599322266" observedRunningTime="2026-01-31 16:43:44.442505795 +0000 UTC m=+811.248562711" watchObservedRunningTime="2026-01-31 16:43:44.444497901 +0000 UTC m=+811.250554817" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.511736 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" podStartSLOduration=10.579881985 podStartE2EDuration="33.511720116s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.118692852 +0000 UTC 
m=+779.924749778" lastFinishedPulling="2026-01-31 16:43:36.050530973 +0000 UTC m=+802.856587909" observedRunningTime="2026-01-31 16:43:44.508927897 +0000 UTC m=+811.314984813" watchObservedRunningTime="2026-01-31 16:43:44.511720116 +0000 UTC m=+811.317777032" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.573070 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" podStartSLOduration=10.471146799 podStartE2EDuration="33.573056605s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:12.949507412 +0000 UTC m=+779.755564318" lastFinishedPulling="2026-01-31 16:43:36.051417208 +0000 UTC m=+802.857474124" observedRunningTime="2026-01-31 16:43:44.567935111 +0000 UTC m=+811.373992027" watchObservedRunningTime="2026-01-31 16:43:44.573056605 +0000 UTC m=+811.379113521" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.600968 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" podStartSLOduration=4.4013872450000004 podStartE2EDuration="33.600950161s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.538552879 +0000 UTC m=+780.344609795" lastFinishedPulling="2026-01-31 16:43:42.738115795 +0000 UTC m=+809.544172711" observedRunningTime="2026-01-31 16:43:44.594930262 +0000 UTC m=+811.400987178" watchObservedRunningTime="2026-01-31 16:43:44.600950161 +0000 UTC m=+811.407007077" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.649866 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" podStartSLOduration=4.781663725 podStartE2EDuration="33.64984768s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.775285092 +0000 UTC m=+780.581342008" lastFinishedPulling="2026-01-31 16:43:42.643469047 +0000 UTC m=+809.449525963" observedRunningTime="2026-01-31 16:43:44.644826098 +0000 UTC m=+811.450883014" watchObservedRunningTime="2026-01-31 16:43:44.64984768 +0000 UTC m=+811.455904586" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.651819 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" podStartSLOduration=5.434623172 podStartE2EDuration="33.651813545s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.777983748 +0000 UTC m=+780.584040664" lastFinishedPulling="2026-01-31 16:43:41.995174121 +0000 UTC m=+808.801231037" observedRunningTime="2026-01-31 16:43:44.620734969 +0000 UTC m=+811.426791885" watchObservedRunningTime="2026-01-31 16:43:44.651813545 +0000 UTC m=+811.457870461" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.673858 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" podStartSLOduration=3.9454201209999997 podStartE2EDuration="32.673844486s" podCreationTimestamp="2026-01-31 16:43:12 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.937441664 +0000 UTC m=+780.743498580" lastFinishedPulling="2026-01-31 16:43:42.665866029 +0000 UTC m=+809.471922945" observedRunningTime="2026-01-31 16:43:44.669272898 +0000 UTC m=+811.475329814" watchObservedRunningTime="2026-01-31 16:43:44.673844486 +0000 UTC m=+811.479901392" 
Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.689313 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" podStartSLOduration=10.957659444 podStartE2EDuration="33.689294702s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.318762372 +0000 UTC m=+780.124819288" lastFinishedPulling="2026-01-31 16:43:36.05039763 +0000 UTC m=+802.856454546" observedRunningTime="2026-01-31 16:43:44.687032998 +0000 UTC m=+811.493089914" watchObservedRunningTime="2026-01-31 16:43:44.689294702 +0000 UTC m=+811.495351618" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.718838 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.718967 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.769506 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-webhook-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.782123 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e76dee4f-067c-436f-85c4-0c538a334973-metrics-certs\") pod \"openstack-operator-controller-manager-5c77fbfdf8-th7sg\" (UID: \"e76dee4f-067c-436f-85c4-0c538a334973\") " pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.841070 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qjfmc" Jan 31 16:43:44 crc kubenswrapper[4730]: I0131 16:43:44.849896 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.430463 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg"] Jan 31 16:43:45 crc kubenswrapper[4730]: W0131 16:43:45.443321 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode76dee4f_067c_436f_85c4_0c538a334973.slice/crio-4f0dba23005cdd8f45a6a76f4f92f222cef7fab69ca7e739382c68da04dcbc54 WatchSource:0}: Error finding container 4f0dba23005cdd8f45a6a76f4f92f222cef7fab69ca7e739382c68da04dcbc54: Status 404 returned error can't find the container with id 4f0dba23005cdd8f45a6a76f4f92f222cef7fab69ca7e739382c68da04dcbc54 Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.862532 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" event={"ID":"b6911ed2-ca0f-4fed-b5c4-3046ac427b97","Type":"ContainerStarted","Data":"ded6368577660a516bce713f6a7360c9e33ab12e6cb673fad7c02293a14d6db6"} Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.863486 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.865581 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" event={"ID":"e76dee4f-067c-436f-85c4-0c538a334973","Type":"ContainerStarted","Data":"6de535388038edb3abda75adbcd4e4ecd74a194d475b847c3cf663804ee3d045"} Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.865639 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" event={"ID":"e76dee4f-067c-436f-85c4-0c538a334973","Type":"ContainerStarted","Data":"4f0dba23005cdd8f45a6a76f4f92f222cef7fab69ca7e739382c68da04dcbc54"} Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.866358 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.868864 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" event={"ID":"e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5","Type":"ContainerStarted","Data":"d5e234d2892e566e873bf97d4dcce30c541a91460f26cd0aa5b6586a160d36de"} Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.869219 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.889739 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" podStartSLOduration=3.708588563 podStartE2EDuration="34.889721154s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.775063876 +0000 UTC m=+780.581120792" lastFinishedPulling="2026-01-31 16:43:44.956196467 +0000 UTC m=+811.762253383" observedRunningTime="2026-01-31 16:43:45.88816208 +0000 UTC m=+812.694218996" watchObservedRunningTime="2026-01-31 16:43:45.889721154 +0000 UTC m=+812.695778070" Jan 31 16:43:45 crc 
kubenswrapper[4730]: I0131 16:43:45.916904 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" podStartSLOduration=2.629104883 podStartE2EDuration="33.91688551s" podCreationTimestamp="2026-01-31 16:43:12 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.748115536 +0000 UTC m=+780.554172452" lastFinishedPulling="2026-01-31 16:43:45.035896163 +0000 UTC m=+811.841953079" observedRunningTime="2026-01-31 16:43:45.913027181 +0000 UTC m=+812.719084097" watchObservedRunningTime="2026-01-31 16:43:45.91688551 +0000 UTC m=+812.722942426" Jan 31 16:43:45 crc kubenswrapper[4730]: I0131 16:43:45.940555 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" podStartSLOduration=33.940541327 podStartE2EDuration="33.940541327s" podCreationTimestamp="2026-01-31 16:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:43:45.938661354 +0000 UTC m=+812.744718270" watchObservedRunningTime="2026-01-31 16:43:45.940541327 +0000 UTC m=+812.746598243" Jan 31 16:43:46 crc kubenswrapper[4730]: I0131 16:43:46.877785 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" event={"ID":"b542fd94-b4bf-44af-8276-7d2e686f5bb4","Type":"ContainerStarted","Data":"94ffa0588e0d5012d7d05439cf0a7534aba42fb622e135848e834ba8ad800102"} Jan 31 16:43:46 crc kubenswrapper[4730]: I0131 16:43:46.879167 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" Jan 31 16:43:46 crc kubenswrapper[4730]: I0131 16:43:46.890994 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" podStartSLOduration=3.395176718 podStartE2EDuration="35.890984172s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.544085204 +0000 UTC m=+780.350142120" lastFinishedPulling="2026-01-31 16:43:46.039892658 +0000 UTC m=+812.845949574" observedRunningTime="2026-01-31 16:43:46.88984603 +0000 UTC m=+813.695902936" watchObservedRunningTime="2026-01-31 16:43:46.890984172 +0000 UTC m=+813.697041088" Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.890898 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" event={"ID":"5d112f3e-564e-4003-90fe-6472c5643d40","Type":"ContainerStarted","Data":"ee010799e84c0c49bc787789ce1a3da609cde959314d18a2e67596cb51153a0e"} Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.891345 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.893165 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" event={"ID":"82fbb691-9ea3-473a-9bd7-22489bcfae0a","Type":"ContainerStarted","Data":"f26d19987c84d72369bfc987812630817ffbdbac5a79be0a0989e41fb2ea551f"} Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.893667 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.895271 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" event={"ID":"58a9ca1b-4bc7-4912-ae16-3210ecea5790","Type":"ContainerStarted","Data":"26316cb19ffa956712061a100313dd1015e05f0110398bd4cababe3e9c2b404b"} Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.895380 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.941338 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" podStartSLOduration=3.100345817 podStartE2EDuration="37.941318963s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.560866188 +0000 UTC m=+780.366923104" lastFinishedPulling="2026-01-31 16:43:48.401839304 +0000 UTC m=+815.207896250" observedRunningTime="2026-01-31 16:43:48.933935785 +0000 UTC m=+815.739992701" watchObservedRunningTime="2026-01-31 16:43:48.941318963 +0000 UTC m=+815.747375879" Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.962181 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" podStartSLOduration=32.216015238 podStartE2EDuration="37.96216603s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:42.65809571 +0000 UTC m=+809.464152616" lastFinishedPulling="2026-01-31 16:43:48.404246482 +0000 UTC m=+815.210303408" observedRunningTime="2026-01-31 16:43:48.956275984 +0000 UTC m=+815.762332900" watchObservedRunningTime="2026-01-31 16:43:48.96216603 +0000 UTC m=+815.768222936" Jan 31 16:43:48 crc kubenswrapper[4730]: I0131 16:43:48.982744 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" podStartSLOduration=32.255054899 podStartE2EDuration="37.98272567s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:42.665354844 +0000 UTC m=+809.471411760" lastFinishedPulling="2026-01-31 16:43:48.393025595 +0000 UTC m=+815.199082531" observedRunningTime="2026-01-31 16:43:48.97881901 +0000 UTC m=+815.784875916" watchObservedRunningTime="2026-01-31 16:43:48.98272567 +0000 UTC m=+815.788782576" Jan 31 16:43:49 crc kubenswrapper[4730]: I0131 16:43:49.901986 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" event={"ID":"4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4","Type":"ContainerStarted","Data":"bf6b0ca68904374aee06540a998837c3d13240d39db3059faf42c5ad3ce97ceb"} Jan 31 16:43:49 crc kubenswrapper[4730]: I0131 16:43:49.902448 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" Jan 31 16:43:49 crc kubenswrapper[4730]: I0131 16:43:49.925704 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" podStartSLOduration=3.585920456 podStartE2EDuration="38.925689824s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 
16:43:13.553987364 +0000 UTC m=+780.360044280" lastFinishedPulling="2026-01-31 16:43:48.893756732 +0000 UTC m=+815.699813648" observedRunningTime="2026-01-31 16:43:49.921891297 +0000 UTC m=+816.727948213" watchObservedRunningTime="2026-01-31 16:43:49.925689824 +0000 UTC m=+816.731746740" Jan 31 16:43:51 crc kubenswrapper[4730]: I0131 16:43:51.875428 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ktcvd" Jan 31 16:43:51 crc kubenswrapper[4730]: I0131 16:43:51.906159 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bzkp6" Jan 31 16:43:51 crc kubenswrapper[4730]: I0131 16:43:51.986900 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hmbg9" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.026439 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-w9r8d" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.173619 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-vcgsr" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.319053 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pcvgw" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.321108 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-cwqb6" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.423169 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-87zjj" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.482786 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-4nshr" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.617232 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4x5l9" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.705412 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-j58sp" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.747678 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dk9lg" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.789594 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85df8f7b7c-krdxf" Jan 31 16:43:52 crc kubenswrapper[4730]: I0131 16:43:52.844439 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zz8nq" Jan 31 16:43:53 crc kubenswrapper[4730]: I0131 16:43:53.010030 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-g28f6" Jan 31 16:43:53 crc 
kubenswrapper[4730]: I0131 16:43:53.170883 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-cd9vd" Jan 31 16:43:53 crc kubenswrapper[4730]: I0131 16:43:53.956997 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" event={"ID":"7befb81f-95d7-4b23-a23d-2255e67528b0","Type":"ContainerStarted","Data":"355ee8e8c84fc17c1545304520ab8d0290a7babf192af6c21a584c4eb687adaa"} Jan 31 16:43:53 crc kubenswrapper[4730]: I0131 16:43:53.958416 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" Jan 31 16:43:53 crc kubenswrapper[4730]: I0131 16:43:53.985765 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" podStartSLOduration=3.869522131 podStartE2EDuration="42.985739983s" podCreationTimestamp="2026-01-31 16:43:11 +0000 UTC" firstStartedPulling="2026-01-31 16:43:13.771791854 +0000 UTC m=+780.577848760" lastFinishedPulling="2026-01-31 16:43:52.888009696 +0000 UTC m=+819.694066612" observedRunningTime="2026-01-31 16:43:53.977725387 +0000 UTC m=+820.783782333" watchObservedRunningTime="2026-01-31 16:43:53.985739983 +0000 UTC m=+820.791796929" Jan 31 16:43:54 crc kubenswrapper[4730]: I0131 16:43:54.856751 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5c77fbfdf8-th7sg" Jan 31 16:43:56 crc kubenswrapper[4730]: I0131 16:43:56.975924 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:43:56 crc kubenswrapper[4730]: I0131 16:43:56.976368 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:43:58 crc kubenswrapper[4730]: I0131 16:43:58.004253 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-89f56" Jan 31 16:43:58 crc kubenswrapper[4730]: I0131 16:43:58.432317 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn" Jan 31 16:44:02 crc kubenswrapper[4730]: I0131 16:44:02.232194 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-v5rrb" Jan 31 16:44:02 crc kubenswrapper[4730]: I0131 16:44:02.459505 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-dl95k" Jan 31 16:44:02 crc kubenswrapper[4730]: I0131 16:44:02.596403 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kdldq" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.700692 4730 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-w2p7z"] Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.703556 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.706417 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.706877 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.707299 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wvzm6" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.707420 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.720649 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-w2p7z"] Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.768654 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7z7r"] Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.769816 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.772376 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.798569 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7z7r"] Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.826233 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3474afd1-6842-4df8-a16f-cfb2070714ab-config\") pod \"dnsmasq-dns-675f4bcbfc-w2p7z\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.826384 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8sml\" (UniqueName: \"kubernetes.io/projected/3474afd1-6842-4df8-a16f-cfb2070714ab-kube-api-access-x8sml\") pod \"dnsmasq-dns-675f4bcbfc-w2p7z\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.927875 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bffmv\" (UniqueName: \"kubernetes.io/projected/f6313619-1a77-4a9b-bc70-177cd533b738-kube-api-access-bffmv\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.927947 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.927985 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-config\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.928011 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3474afd1-6842-4df8-a16f-cfb2070714ab-config\") pod \"dnsmasq-dns-675f4bcbfc-w2p7z\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.928138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8sml\" (UniqueName: \"kubernetes.io/projected/3474afd1-6842-4df8-a16f-cfb2070714ab-kube-api-access-x8sml\") pod \"dnsmasq-dns-675f4bcbfc-w2p7z\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.928953 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3474afd1-6842-4df8-a16f-cfb2070714ab-config\") pod \"dnsmasq-dns-675f4bcbfc-w2p7z\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:19 crc kubenswrapper[4730]: I0131 16:44:19.953490 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8sml\" (UniqueName: \"kubernetes.io/projected/3474afd1-6842-4df8-a16f-cfb2070714ab-kube-api-access-x8sml\") pod \"dnsmasq-dns-675f4bcbfc-w2p7z\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.028900 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.031195 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-config\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.031428 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bffmv\" (UniqueName: \"kubernetes.io/projected/f6313619-1a77-4a9b-bc70-177cd533b738-kube-api-access-bffmv\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.031456 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.032057 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-config\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.032320 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.052524 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bffmv\" (UniqueName: \"kubernetes.io/projected/f6313619-1a77-4a9b-bc70-177cd533b738-kube-api-access-bffmv\") pod \"dnsmasq-dns-78dd6ddcc-c7z7r\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.088103 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.498004 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-w2p7z"] Jan 31 16:44:20 crc kubenswrapper[4730]: I0131 16:44:20.552215 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7z7r"] Jan 31 16:44:20 crc kubenswrapper[4730]: W0131 16:44:20.561931 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6313619_1a77_4a9b_bc70_177cd533b738.slice/crio-99618bcdb0b31619b2aa3fdb320785e1336ea382d2888e871857a2fb7a490426 WatchSource:0}: Error finding container 99618bcdb0b31619b2aa3fdb320785e1336ea382d2888e871857a2fb7a490426: Status 404 returned error can't find the container with id 99618bcdb0b31619b2aa3fdb320785e1336ea382d2888e871857a2fb7a490426 Jan 31 16:44:21 crc kubenswrapper[4730]: I0131 16:44:21.187400 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" event={"ID":"f6313619-1a77-4a9b-bc70-177cd533b738","Type":"ContainerStarted","Data":"99618bcdb0b31619b2aa3fdb320785e1336ea382d2888e871857a2fb7a490426"} Jan 31 16:44:21 crc kubenswrapper[4730]: I0131 16:44:21.188649 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" event={"ID":"3474afd1-6842-4df8-a16f-cfb2070714ab","Type":"ContainerStarted","Data":"8acd50c2bc94abb5bf3f77138fdf08d10e42a902056d81e503b8c525bcefe7b0"} Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.691046 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-w2p7z"] Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.709475 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5mhxq"] Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.710541 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.720007 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5mhxq"] Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.895196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxd6g\" (UniqueName: \"kubernetes.io/projected/c6c05f77-50d7-4933-aca0-45a255bbd253-kube-api-access-cxd6g\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.895247 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.895304 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-config\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.996039 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxd6g\" (UniqueName: \"kubernetes.io/projected/c6c05f77-50d7-4933-aca0-45a255bbd253-kube-api-access-cxd6g\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.996365 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.996437 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-config\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.997274 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:22 crc kubenswrapper[4730]: I0131 16:44:22.997309 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-config\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.011997 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7z7r"] Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.024918 
4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxd6g\" (UniqueName: \"kubernetes.io/projected/c6c05f77-50d7-4933-aca0-45a255bbd253-kube-api-access-cxd6g\") pod \"dnsmasq-dns-666b6646f7-5mhxq\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.035947 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8ws2c"] Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.036941 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.044246 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.059654 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8ws2c"] Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.199938 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-config\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.199993 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.200038 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7gw\" (UniqueName: \"kubernetes.io/projected/98fb2db2-b72a-4c8a-94ba-08e1567ba221-kube-api-access-xh7gw\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.300696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7gw\" (UniqueName: \"kubernetes.io/projected/98fb2db2-b72a-4c8a-94ba-08e1567ba221-kube-api-access-xh7gw\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.301042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-config\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.301066 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.302551 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-config\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.302669 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.317673 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7gw\" (UniqueName: \"kubernetes.io/projected/98fb2db2-b72a-4c8a-94ba-08e1567ba221-kube-api-access-xh7gw\") pod \"dnsmasq-dns-57d769cc4f-8ws2c\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.362450 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.539942 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5mhxq"] Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.851585 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8ws2c"] Jan 31 16:44:23 crc kubenswrapper[4730]: W0131 16:44:23.859126 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98fb2db2_b72a_4c8a_94ba_08e1567ba221.slice/crio-3f32b065073d220ef58bf584ffc8856d5ffc64f14f44ceb965ab3d9397a51023 WatchSource:0}: Error finding container 3f32b065073d220ef58bf584ffc8856d5ffc64f14f44ceb965ab3d9397a51023: Status 404 returned error can't find the container with id 3f32b065073d220ef58bf584ffc8856d5ffc64f14f44ceb965ab3d9397a51023 Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.876008 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.877198 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.882468 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.883038 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.883236 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8wttf" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.883256 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.883569 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.883652 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.885688 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 16:44:23 crc kubenswrapper[4730]: I0131 16:44:23.886550 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027274 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027335 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027354 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027411 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4hm\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-kube-api-access-rr4hm\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027484 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027502 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027528 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027555 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.027581 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129084 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129131 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129155 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " 
pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr4hm\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-kube-api-access-rr4hm\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129259 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129286 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129324 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129350 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.129376 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.130718 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.130786 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.131390 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.131679 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.131854 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.132424 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.138663 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.139139 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.139218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.140838 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.148581 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr4hm\" (UniqueName: \"kubernetes.io/projected/3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda-kube-api-access-rr4hm\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.181747 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda\") " pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.206082 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.218972 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.221560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.232633 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.233020 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.233099 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.233782 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2x2fl" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.260246 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.260346 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.272119 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.274514 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.299221 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" event={"ID":"c6c05f77-50d7-4933-aca0-45a255bbd253","Type":"ContainerStarted","Data":"c18119f95b4a1096048dfe3f13b7497f9df26cfaf4e93bb1f9ad37fddd8d14b8"} Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.314621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" event={"ID":"98fb2db2-b72a-4c8a-94ba-08e1567ba221","Type":"ContainerStarted","Data":"3f32b065073d220ef58bf584ffc8856d5ffc64f14f44ceb965ab3d9397a51023"} Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359034 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359215 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359288 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359364 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359488 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359572 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359650 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/696f3c30-383d-4a98-ab73-bd90571c8fac-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359752 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9pcl\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-kube-api-access-s9pcl\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359859 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/696f3c30-383d-4a98-ab73-bd90571c8fac-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.359941 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.360023 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471646 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/696f3c30-383d-4a98-ab73-bd90571c8fac-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471689 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471711 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471735 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471758 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471795 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471843 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471860 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471879 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/696f3c30-383d-4a98-ab73-bd90571c8fac-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.471906 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s9pcl\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-kube-api-access-s9pcl\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.473618 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.477464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.478076 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.478790 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.482448 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.482745 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/696f3c30-383d-4a98-ab73-bd90571c8fac-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.492280 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/696f3c30-383d-4a98-ab73-bd90571c8fac-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.505144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/696f3c30-383d-4a98-ab73-bd90571c8fac-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.533922 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9pcl\" (UniqueName: 
\"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-kube-api-access-s9pcl\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.534849 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.535488 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/696f3c30-383d-4a98-ab73-bd90571c8fac-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.571142 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"696f3c30-383d-4a98-ab73-bd90571c8fac\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.581930 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:44:24 crc kubenswrapper[4730]: I0131 16:44:24.794532 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.354054 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.355903 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.362205 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.362228 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.362780 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.364163 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.364497 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-c9fxb" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.364971 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.488931 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d233a-2c8a-4873-b53b-eb8c3e792160-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.489012 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d233a-2c8a-4873-b53b-eb8c3e792160-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.489034 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f96d233a-2c8a-4873-b53b-eb8c3e792160-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.489065 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.489084 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-kolla-config\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.489108 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-config-data-default\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.489126 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.489158 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psxxc\" (UniqueName: \"kubernetes.io/projected/f96d233a-2c8a-4873-b53b-eb8c3e792160-kube-api-access-psxxc\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590624 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d233a-2c8a-4873-b53b-eb8c3e792160-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f96d233a-2c8a-4873-b53b-eb8c3e792160-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590710 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590725 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-kolla-config\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590748 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-config-data-default\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590762 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590815 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psxxc\" (UniqueName: \"kubernetes.io/projected/f96d233a-2c8a-4873-b53b-eb8c3e792160-kube-api-access-psxxc\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.590849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d233a-2c8a-4873-b53b-eb8c3e792160-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.595765 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.595890 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-config-data-default\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.596235 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.596665 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f96d233a-2c8a-4873-b53b-eb8c3e792160-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.597337 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f96d233a-2c8a-4873-b53b-eb8c3e792160-kolla-config\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.613393 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psxxc\" (UniqueName: \"kubernetes.io/projected/f96d233a-2c8a-4873-b53b-eb8c3e792160-kube-api-access-psxxc\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.613557 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96d233a-2c8a-4873-b53b-eb8c3e792160-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.621792 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.625247 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96d233a-2c8a-4873-b53b-eb8c3e792160-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f96d233a-2c8a-4873-b53b-eb8c3e792160\") " pod="openstack/openstack-galera-0" Jan 31 16:44:25 crc kubenswrapper[4730]: I0131 16:44:25.678251 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.820505 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.821862 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.827457 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.827653 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-h8mlb" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.828263 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.828446 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.842600 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.921476 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c157a-5c9c-4043-a85a-5075e5ed9db5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.921517 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.921541 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtjc\" (UniqueName: \"kubernetes.io/projected/532c157a-5c9c-4043-a85a-5075e5ed9db5-kube-api-access-ggtjc\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.921568 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.921591 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/532c157a-5c9c-4043-a85a-5075e5ed9db5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.921722 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.921911 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.922039 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532c157a-5c9c-4043-a85a-5075e5ed9db5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.974523 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.974576 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.974619 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.975230 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d31bd001ee74e3469a2749b923f42adb83a31cb422ef5d9b45febe42584ea0e1"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.975284 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://d31bd001ee74e3469a2749b923f42adb83a31cb422ef5d9b45febe42584ea0e1" gracePeriod=600 Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.979956 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.980858 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.989266 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5flzc" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.989298 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 16:44:26 crc kubenswrapper[4730]: I0131 16:44:26.989499 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.001284 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024141 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024208 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532c157a-5c9c-4043-a85a-5075e5ed9db5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c157a-5c9c-4043-a85a-5075e5ed9db5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024249 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024264 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtjc\" (UniqueName: \"kubernetes.io/projected/532c157a-5c9c-4043-a85a-5075e5ed9db5-kube-api-access-ggtjc\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024287 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024305 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/532c157a-5c9c-4043-a85a-5075e5ed9db5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024333 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.024548 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.026712 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.027110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.027366 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/532c157a-5c9c-4043-a85a-5075e5ed9db5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.027573 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/532c157a-5c9c-4043-a85a-5075e5ed9db5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.043630 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtjc\" (UniqueName: \"kubernetes.io/projected/532c157a-5c9c-4043-a85a-5075e5ed9db5-kube-api-access-ggtjc\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.048647 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/532c157a-5c9c-4043-a85a-5075e5ed9db5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.062578 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532c157a-5c9c-4043-a85a-5075e5ed9db5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.068754 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"532c157a-5c9c-4043-a85a-5075e5ed9db5\") " pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.125275 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nnbm\" (UniqueName: \"kubernetes.io/projected/e3fc84d7-b01c-4396-89e2-54684791a14d-kube-api-access-7nnbm\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.125315 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fc84d7-b01c-4396-89e2-54684791a14d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.125334 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fc84d7-b01c-4396-89e2-54684791a14d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.125353 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3fc84d7-b01c-4396-89e2-54684791a14d-kolla-config\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.125373 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3fc84d7-b01c-4396-89e2-54684791a14d-config-data\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.151126 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.227282 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nnbm\" (UniqueName: \"kubernetes.io/projected/e3fc84d7-b01c-4396-89e2-54684791a14d-kube-api-access-7nnbm\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.227323 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fc84d7-b01c-4396-89e2-54684791a14d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.227343 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fc84d7-b01c-4396-89e2-54684791a14d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.227360 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3fc84d7-b01c-4396-89e2-54684791a14d-kolla-config\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.227379 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3fc84d7-b01c-4396-89e2-54684791a14d-config-data\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.228098 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3fc84d7-b01c-4396-89e2-54684791a14d-config-data\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.251415 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3fc84d7-b01c-4396-89e2-54684791a14d-kolla-config\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.251766 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3fc84d7-b01c-4396-89e2-54684791a14d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.252189 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fc84d7-b01c-4396-89e2-54684791a14d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.276412 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nnbm\" (UniqueName: \"kubernetes.io/projected/e3fc84d7-b01c-4396-89e2-54684791a14d-kube-api-access-7nnbm\") pod \"memcached-0\" (UID: 
\"e3fc84d7-b01c-4396-89e2-54684791a14d\") " pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.303945 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.352059 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="d31bd001ee74e3469a2749b923f42adb83a31cb422ef5d9b45febe42584ea0e1" exitCode=0 Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.352099 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"d31bd001ee74e3469a2749b923f42adb83a31cb422ef5d9b45febe42584ea0e1"} Jan 31 16:44:27 crc kubenswrapper[4730]: I0131 16:44:27.352131 4730 scope.go:117] "RemoveContainer" containerID="81c316c56ff641f78d1454bdb69055b2cc577488dee85bfffb222944d2c0456f" Jan 31 16:44:28 crc kubenswrapper[4730]: I0131 16:44:28.922159 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:44:28 crc kubenswrapper[4730]: I0131 16:44:28.923191 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 16:44:28 crc kubenswrapper[4730]: I0131 16:44:28.924649 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-q2nwp" Jan 31 16:44:28 crc kubenswrapper[4730]: I0131 16:44:28.933753 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:44:29 crc kubenswrapper[4730]: I0131 16:44:29.060018 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t54gm\" (UniqueName: \"kubernetes.io/projected/7a3833dd-076f-425d-bcf2-05c52520be71-kube-api-access-t54gm\") pod \"kube-state-metrics-0\" (UID: \"7a3833dd-076f-425d-bcf2-05c52520be71\") " pod="openstack/kube-state-metrics-0" Jan 31 16:44:29 crc kubenswrapper[4730]: I0131 16:44:29.161691 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t54gm\" (UniqueName: \"kubernetes.io/projected/7a3833dd-076f-425d-bcf2-05c52520be71-kube-api-access-t54gm\") pod \"kube-state-metrics-0\" (UID: \"7a3833dd-076f-425d-bcf2-05c52520be71\") " pod="openstack/kube-state-metrics-0" Jan 31 16:44:29 crc kubenswrapper[4730]: I0131 16:44:29.178637 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t54gm\" (UniqueName: \"kubernetes.io/projected/7a3833dd-076f-425d-bcf2-05c52520be71-kube-api-access-t54gm\") pod \"kube-state-metrics-0\" (UID: \"7a3833dd-076f-425d-bcf2-05c52520be71\") " pod="openstack/kube-state-metrics-0" Jan 31 16:44:29 crc kubenswrapper[4730]: I0131 16:44:29.247089 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 16:44:30 crc kubenswrapper[4730]: I0131 16:44:30.376639 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda","Type":"ContainerStarted","Data":"b8628332f03bd987d7f39f49d51701fe6d023505db7d3543922c2d718682a345"} Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.133665 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gbpkm"] Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.135290 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.139167 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kzvnn" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.139385 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.143812 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.150880 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-88h7f"] Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.152621 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.171564 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gbpkm"] Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.175780 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-88h7f"] Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232537 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-run\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232599 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-etc-ovs\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232673 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b59c538-9f79-4e4e-9d74-6eb5f1758795-ovn-controller-tls-certs\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232698 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-log\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232714 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz6h2\" (UniqueName: \"kubernetes.io/projected/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-kube-api-access-rz6h2\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232738 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-scripts\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232753 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-run\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232773 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b59c538-9f79-4e4e-9d74-6eb5f1758795-scripts\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232795 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-lib\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232863 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-log-ovn\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232882 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csstb\" (UniqueName: \"kubernetes.io/projected/1b59c538-9f79-4e4e-9d74-6eb5f1758795-kube-api-access-csstb\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232898 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-run-ovn\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.232921 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b59c538-9f79-4e4e-9d74-6eb5f1758795-combined-ca-bundle\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334490 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b59c538-9f79-4e4e-9d74-6eb5f1758795-ovn-controller-tls-certs\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334536 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-log\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334556 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz6h2\" (UniqueName: \"kubernetes.io/projected/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-kube-api-access-rz6h2\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334580 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-scripts\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334596 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-run\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334618 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b59c538-9f79-4e4e-9d74-6eb5f1758795-scripts\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334646 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-lib\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-log-ovn\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csstb\" (UniqueName: \"kubernetes.io/projected/1b59c538-9f79-4e4e-9d74-6eb5f1758795-kube-api-access-csstb\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334712 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-run-ovn\") pod \"ovn-controller-gbpkm\" (UID: 
\"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334731 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b59c538-9f79-4e4e-9d74-6eb5f1758795-combined-ca-bundle\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334776 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-run\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.334817 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-etc-ovs\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.335203 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-run\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.335264 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-etc-ovs\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.335360 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-run-ovn\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.335472 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-lib\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.335575 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-log-ovn\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.335608 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b59c538-9f79-4e4e-9d74-6eb5f1758795-var-run\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.336959 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1b59c538-9f79-4e4e-9d74-6eb5f1758795-scripts\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.337074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-var-log\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.337442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-scripts\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.343776 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b59c538-9f79-4e4e-9d74-6eb5f1758795-ovn-controller-tls-certs\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.346297 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b59c538-9f79-4e4e-9d74-6eb5f1758795-combined-ca-bundle\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.362787 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csstb\" (UniqueName: \"kubernetes.io/projected/1b59c538-9f79-4e4e-9d74-6eb5f1758795-kube-api-access-csstb\") pod \"ovn-controller-gbpkm\" (UID: \"1b59c538-9f79-4e4e-9d74-6eb5f1758795\") " pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.375532 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz6h2\" (UniqueName: \"kubernetes.io/projected/0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6-kube-api-access-rz6h2\") pod \"ovn-controller-ovs-88h7f\" (UID: \"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6\") " pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.463141 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.489882 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.995569 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 16:44:33 crc kubenswrapper[4730]: I0131 16:44:33.998106 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.003065 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.003312 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bfqmz" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.003597 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.003793 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.003923 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.011174 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153348 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-config\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153427 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s7t2\" (UniqueName: \"kubernetes.io/projected/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-kube-api-access-5s7t2\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153490 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153528 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153566 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153712 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153761 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.153943 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256220 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256291 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256365 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256399 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-config\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256422 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s7t2\" (UniqueName: \"kubernetes.io/projected/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-kube-api-access-5s7t2\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256483 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.256512 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: 
I0131 16:44:34.256743 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.257293 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.258131 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-config\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.259167 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.260976 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.261126 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.261599 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.277311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.283102 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s7t2\" (UniqueName: \"kubernetes.io/projected/8dcfa71d-54ed-4415-92cf-0dd4133a5c96-kube-api-access-5s7t2\") pod \"ovsdbserver-nb-0\" (UID: \"8dcfa71d-54ed-4415-92cf-0dd4133a5c96\") " pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:34 crc kubenswrapper[4730]: I0131 16:44:34.359250 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.676833 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.678361 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.689638 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.689841 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.690083 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.690204 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fkwzq" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.717586 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.811685 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.811738 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.811776 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.811980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.812061 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.812412 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 
16:44:36.812461 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.812491 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gq6f\" (UniqueName: \"kubernetes.io/projected/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-kube-api-access-9gq6f\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914029 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914120 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914148 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914165 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914211 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.914246 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gq6f\" (UniqueName: \"kubernetes.io/projected/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-kube-api-access-9gq6f\") 
pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.915566 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.915764 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.916792 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.917400 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.922346 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.930099 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gq6f\" (UniqueName: \"kubernetes.io/projected/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-kube-api-access-9gq6f\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.932442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.935162 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07dc548-3987-41f8-89d8-ca3f94e1b0c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:36 crc kubenswrapper[4730]: I0131 16:44:36.936089 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b07dc548-3987-41f8-89d8-ca3f94e1b0c1\") " pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:37 crc kubenswrapper[4730]: I0131 16:44:37.021907 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.044889 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.045480 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxd6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5mhxq_openstack(c6c05f77-50d7-4933-aca0-45a255bbd253): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.046713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.121956 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.122223 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8sml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-w2p7z_openstack(3474afd1-6842-4df8-a16f-cfb2070714ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.123421 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" podUID="3474afd1-6842-4df8-a16f-cfb2070714ab" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.168877 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.169219 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bffmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-c7z7r_openstack(f6313619-1a77-4a9b-bc70-177cd533b738): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.171918 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" podUID="f6313619-1a77-4a9b-bc70-177cd533b738" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.188049 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.188238 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xh7gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8ws2c_openstack(98fb2db2-b72a-4c8a-94ba-08e1567ba221): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.189411 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" Jan 31 16:44:41 crc kubenswrapper[4730]: I0131 16:44:41.463857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"9edfe6ca891dac90613c7fe072627dce26dbef80751209cf3e40ccba97010f80"} Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.468262 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" Jan 31 16:44:41 crc kubenswrapper[4730]: E0131 16:44:41.468493 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" Jan 31 16:44:41 crc kubenswrapper[4730]: I0131 16:44:41.620265 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 16:44:41 crc kubenswrapper[4730]: 
I0131 16:44:41.644170 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 16:44:41 crc kubenswrapper[4730]: I0131 16:44:41.731229 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 16:44:41 crc kubenswrapper[4730]: I0131 16:44:41.760624 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:44:41 crc kubenswrapper[4730]: W0131 16:44:41.771813 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532c157a_5c9c_4043_a85a_5075e5ed9db5.slice/crio-847fa8139f6eac1a0fa598aa8abebf7597e023360571b28441c91221f3151dcb WatchSource:0}: Error finding container 847fa8139f6eac1a0fa598aa8abebf7597e023360571b28441c91221f3151dcb: Status 404 returned error can't find the container with id 847fa8139f6eac1a0fa598aa8abebf7597e023360571b28441c91221f3151dcb Jan 31 16:44:41 crc kubenswrapper[4730]: I0131 16:44:41.773103 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 16:44:41 crc kubenswrapper[4730]: I0131 16:44:41.996087 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.005626 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.123263 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-config\") pod \"f6313619-1a77-4a9b-bc70-177cd533b738\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.123315 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bffmv\" (UniqueName: \"kubernetes.io/projected/f6313619-1a77-4a9b-bc70-177cd533b738-kube-api-access-bffmv\") pod \"f6313619-1a77-4a9b-bc70-177cd533b738\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.123372 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-dns-svc\") pod \"f6313619-1a77-4a9b-bc70-177cd533b738\" (UID: \"f6313619-1a77-4a9b-bc70-177cd533b738\") " Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.123411 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8sml\" (UniqueName: \"kubernetes.io/projected/3474afd1-6842-4df8-a16f-cfb2070714ab-kube-api-access-x8sml\") pod \"3474afd1-6842-4df8-a16f-cfb2070714ab\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.123438 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3474afd1-6842-4df8-a16f-cfb2070714ab-config\") pod \"3474afd1-6842-4df8-a16f-cfb2070714ab\" (UID: \"3474afd1-6842-4df8-a16f-cfb2070714ab\") " Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.123823 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"f6313619-1a77-4a9b-bc70-177cd533b738" (UID: "f6313619-1a77-4a9b-bc70-177cd533b738"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.123959 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3474afd1-6842-4df8-a16f-cfb2070714ab-config" (OuterVolumeSpecName: "config") pod "3474afd1-6842-4df8-a16f-cfb2070714ab" (UID: "3474afd1-6842-4df8-a16f-cfb2070714ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.124057 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-config" (OuterVolumeSpecName: "config") pod "f6313619-1a77-4a9b-bc70-177cd533b738" (UID: "f6313619-1a77-4a9b-bc70-177cd533b738"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.164663 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.174354 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3474afd1-6842-4df8-a16f-cfb2070714ab-kube-api-access-x8sml" (OuterVolumeSpecName: "kube-api-access-x8sml") pod "3474afd1-6842-4df8-a16f-cfb2070714ab" (UID: "3474afd1-6842-4df8-a16f-cfb2070714ab"). InnerVolumeSpecName "kube-api-access-x8sml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.174529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6313619-1a77-4a9b-bc70-177cd533b738-kube-api-access-bffmv" (OuterVolumeSpecName: "kube-api-access-bffmv") pod "f6313619-1a77-4a9b-bc70-177cd533b738" (UID: "f6313619-1a77-4a9b-bc70-177cd533b738"). InnerVolumeSpecName "kube-api-access-bffmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.180223 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gbpkm"] Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.225698 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.225747 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bffmv\" (UniqueName: \"kubernetes.io/projected/f6313619-1a77-4a9b-bc70-177cd533b738-kube-api-access-bffmv\") on node \"crc\" DevicePath \"\"" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.225763 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6313619-1a77-4a9b-bc70-177cd533b738-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.225777 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8sml\" (UniqueName: \"kubernetes.io/projected/3474afd1-6842-4df8-a16f-cfb2070714ab-kube-api-access-x8sml\") on node \"crc\" DevicePath \"\"" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.225825 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3474afd1-6842-4df8-a16f-cfb2070714ab-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.459699 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.483979 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.484301 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-w2p7z" event={"ID":"3474afd1-6842-4df8-a16f-cfb2070714ab","Type":"ContainerDied","Data":"8acd50c2bc94abb5bf3f77138fdf08d10e42a902056d81e503b8c525bcefe7b0"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.486365 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"532c157a-5c9c-4043-a85a-5075e5ed9db5","Type":"ContainerStarted","Data":"847fa8139f6eac1a0fa598aa8abebf7597e023360571b28441c91221f3151dcb"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.491756 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696f3c30-383d-4a98-ab73-bd90571c8fac","Type":"ContainerStarted","Data":"738a8d90e084eb51aaac5fdb57a21702bfa49a9561dabb0e0f201a2592aebfc6"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.494425 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" event={"ID":"f6313619-1a77-4a9b-bc70-177cd533b738","Type":"ContainerDied","Data":"99618bcdb0b31619b2aa3fdb320785e1336ea382d2888e871857a2fb7a490426"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.494510 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7z7r" Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.497381 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a3833dd-076f-425d-bcf2-05c52520be71","Type":"ContainerStarted","Data":"0def8b0565011b24ff82ffe1441ff64a54478fdfbffa201bc16b28f7281857b2"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.498337 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e3fc84d7-b01c-4396-89e2-54684791a14d","Type":"ContainerStarted","Data":"abb63b7be11d488f8a2cc295abca07dbcd1daedfcd87f3515b80e2d2bea53dfe"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.501030 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda","Type":"ContainerStarted","Data":"4d382f6bf7143cdd0df6cad985283e26d4208dccdf17de8175042fb502777962"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.528025 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gbpkm" event={"ID":"1b59c538-9f79-4e4e-9d74-6eb5f1758795","Type":"ContainerStarted","Data":"16564c1ebb1f0559ca6fc934f13889113add8b6b4a6319ea6a98a21e6aba8d13"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.534905 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f96d233a-2c8a-4873-b53b-eb8c3e792160","Type":"ContainerStarted","Data":"dd286e9987f76a20f427c366762088ca38f8cd84d428d6f4785a99476edabc57"} Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.593413 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7z7r"] Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.612739 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7z7r"] Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.651099 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-w2p7z"] Jan 31 16:44:42 crc kubenswrapper[4730]: I0131 16:44:42.681581 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-w2p7z"] Jan 31 16:44:43 crc kubenswrapper[4730]: I0131 16:44:43.120766 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-88h7f"] Jan 31 16:44:43 crc kubenswrapper[4730]: I0131 16:44:43.365352 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 16:44:43 crc kubenswrapper[4730]: I0131 16:44:43.546026 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8dcfa71d-54ed-4415-92cf-0dd4133a5c96","Type":"ContainerStarted","Data":"ccd2973acdad327ced5532e7fd63525b9cbcf67a841df363b2b7f6b93338ed73"} Jan 31 16:44:43 crc kubenswrapper[4730]: I0131 16:44:43.548031 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696f3c30-383d-4a98-ab73-bd90571c8fac","Type":"ContainerStarted","Data":"a6cc9a21447d28c0904a7fed6d8eda4afbac81c2529b1fa12165f1a5533ad371"} Jan 31 16:44:43 crc kubenswrapper[4730]: W0131 16:44:43.944690 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb07dc548_3987_41f8_89d8_ca3f94e1b0c1.slice/crio-f2f64f316f509478410f79aa643400568fbaabff4640e8a613960c93bc7a5790 WatchSource:0}: Error finding container 
f2f64f316f509478410f79aa643400568fbaabff4640e8a613960c93bc7a5790: Status 404 returned error can't find the container with id f2f64f316f509478410f79aa643400568fbaabff4640e8a613960c93bc7a5790 Jan 31 16:44:44 crc kubenswrapper[4730]: I0131 16:44:44.474149 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3474afd1-6842-4df8-a16f-cfb2070714ab" path="/var/lib/kubelet/pods/3474afd1-6842-4df8-a16f-cfb2070714ab/volumes" Jan 31 16:44:44 crc kubenswrapper[4730]: I0131 16:44:44.474818 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6313619-1a77-4a9b-bc70-177cd533b738" path="/var/lib/kubelet/pods/f6313619-1a77-4a9b-bc70-177cd533b738/volumes" Jan 31 16:44:44 crc kubenswrapper[4730]: I0131 16:44:44.578318 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-88h7f" event={"ID":"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6","Type":"ContainerStarted","Data":"9918e516809ccb7743677ba223c3a2f0b74328c6ad75ce8500488d942d8024ac"} Jan 31 16:44:44 crc kubenswrapper[4730]: I0131 16:44:44.584091 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b07dc548-3987-41f8-89d8-ca3f94e1b0c1","Type":"ContainerStarted","Data":"f2f64f316f509478410f79aa643400568fbaabff4640e8a613960c93bc7a5790"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.635083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8dcfa71d-54ed-4415-92cf-0dd4133a5c96","Type":"ContainerStarted","Data":"5fa69563fbe2dd58a87a400040fc6a4f0093a7c963930bc0e81e5122d70e3f80"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.640157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"532c157a-5c9c-4043-a85a-5075e5ed9db5","Type":"ContainerStarted","Data":"3bf35315d0791917507533f7f401e697133c074aecd1c74f5017a86da4507459"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.642863 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b07dc548-3987-41f8-89d8-ca3f94e1b0c1","Type":"ContainerStarted","Data":"dbcc214015dcd214517b06171fd664e687a4defb85a60da78f2c030d1aaea111"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.647565 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a3833dd-076f-425d-bcf2-05c52520be71","Type":"ContainerStarted","Data":"61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.648414 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.651029 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gbpkm" event={"ID":"1b59c538-9f79-4e4e-9d74-6eb5f1758795","Type":"ContainerStarted","Data":"06b0f6e1a42cc3b4f1f6ce8017b59679c94afae36b4ee70b43fb7c3f1b4104ed"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.651214 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gbpkm" Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.669830 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f96d233a-2c8a-4873-b53b-eb8c3e792160","Type":"ContainerStarted","Data":"9141dd9ad4a702b2eafee1908c695a354ce88f8fbebfcde5bffc55b49b906648"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.671676 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e3fc84d7-b01c-4396-89e2-54684791a14d","Type":"ContainerStarted","Data":"32221426d1f4644ac16de3869b56bf8c2db0543a6e16d74a92836c8706df68d7"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.672197 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.673290 4730 generic.go:334] "Generic (PLEG): container finished" podID="0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6" containerID="7952cdaec625cf0cdab52e5fe56db284e0188a42888a580244ba5cf316b515b9" exitCode=0 Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.673336 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-88h7f" event={"ID":"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6","Type":"ContainerDied","Data":"7952cdaec625cf0cdab52e5fe56db284e0188a42888a580244ba5cf316b515b9"} Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.690416 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.872649537000001 podStartE2EDuration="22.690401805s" podCreationTimestamp="2026-01-31 16:44:28 +0000 UTC" firstStartedPulling="2026-01-31 16:44:41.779531462 +0000 UTC m=+868.585588378" lastFinishedPulling="2026-01-31 16:44:49.59728373 +0000 UTC m=+876.403340646" observedRunningTime="2026-01-31 16:44:50.67811577 +0000 UTC m=+877.484172686" watchObservedRunningTime="2026-01-31 16:44:50.690401805 +0000 UTC m=+877.496458721" Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.697651 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gbpkm" podStartSLOduration=10.25760007 podStartE2EDuration="17.697635295s" podCreationTimestamp="2026-01-31 16:44:33 +0000 UTC" firstStartedPulling="2026-01-31 16:44:42.182187846 +0000 UTC m=+868.988244762" lastFinishedPulling="2026-01-31 16:44:49.622223051 +0000 UTC m=+876.428279987" observedRunningTime="2026-01-31 16:44:50.697495172 +0000 UTC m=+877.503552088" watchObservedRunningTime="2026-01-31 16:44:50.697635295 +0000 UTC m=+877.503692211" Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.747295 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.851027172 podStartE2EDuration="24.747273419s" podCreationTimestamp="2026-01-31 16:44:26 +0000 UTC" firstStartedPulling="2026-01-31 16:44:41.64395326 +0000 UTC m=+868.450010166" lastFinishedPulling="2026-01-31 16:44:44.540199497 +0000 UTC m=+871.346256413" observedRunningTime="2026-01-31 16:44:50.743917526 +0000 UTC m=+877.549974432" watchObservedRunningTime="2026-01-31 16:44:50.747273419 +0000 UTC m=+877.553330335" Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.961840 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r2g75"] Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.965145 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:50 crc kubenswrapper[4730]: I0131 16:44:50.993672 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2g75"] Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.100829 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-utilities\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.100891 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-catalog-content\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.101052 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9h6\" (UniqueName: \"kubernetes.io/projected/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-kube-api-access-lm9h6\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.203678 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9h6\" (UniqueName: \"kubernetes.io/projected/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-kube-api-access-lm9h6\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.203716 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-utilities\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.203733 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-catalog-content\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.204306 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-utilities\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.204769 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-catalog-content\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.223267 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lm9h6\" (UniqueName: \"kubernetes.io/projected/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-kube-api-access-lm9h6\") pod \"community-operators-r2g75\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.303836 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.686741 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-88h7f" event={"ID":"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6","Type":"ContainerStarted","Data":"2c59e7f3a6082ce60d7259001c4470f463458413901153d74ea911b0d952407c"} Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.687062 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-88h7f" event={"ID":"0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6","Type":"ContainerStarted","Data":"14705fbc1cf34d0d77e37eae444d6d8a4873749e19b25c616c5ca1fa0b02bea1"} Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.687644 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.687683 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:44:51 crc kubenswrapper[4730]: I0131 16:44:51.714393 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-88h7f" podStartSLOduration=13.054913376 podStartE2EDuration="18.71437686s" podCreationTimestamp="2026-01-31 16:44:33 +0000 UTC" firstStartedPulling="2026-01-31 16:44:43.96530903 +0000 UTC m=+870.771365946" lastFinishedPulling="2026-01-31 16:44:49.624772514 +0000 UTC m=+876.430829430" observedRunningTime="2026-01-31 16:44:51.708289719 +0000 UTC m=+878.514346635" watchObservedRunningTime="2026-01-31 16:44:51.71437686 +0000 UTC m=+878.520433776" Jan 31 16:44:52 crc kubenswrapper[4730]: W0131 16:44:52.606174 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8d1c9b3_f51e_4f3f_a2f4_e7d7a43f17a9.slice/crio-493a09d06ba5b8a509563b7cbafdc69915276ee02c5c5738e83ffa8c3e431673 WatchSource:0}: Error finding container 493a09d06ba5b8a509563b7cbafdc69915276ee02c5c5738e83ffa8c3e431673: Status 404 returned error can't find the container with id 493a09d06ba5b8a509563b7cbafdc69915276ee02c5c5738e83ffa8c3e431673 Jan 31 16:44:52 crc kubenswrapper[4730]: I0131 16:44:52.613387 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2g75"] Jan 31 16:44:52 crc kubenswrapper[4730]: I0131 16:44:52.695979 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b07dc548-3987-41f8-89d8-ca3f94e1b0c1","Type":"ContainerStarted","Data":"7fcdbf7b7486c186f64237ccae85c92e5990c94e30efb6d38ada9dffde3ec4af"} Jan 31 16:44:52 crc kubenswrapper[4730]: I0131 16:44:52.699920 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8dcfa71d-54ed-4415-92cf-0dd4133a5c96","Type":"ContainerStarted","Data":"aaf3eafb24522980d3ba34a61b66856bcf9583055bf7c04eb1d8e98f3124544e"} Jan 31 16:44:52 crc kubenswrapper[4730]: I0131 16:44:52.701441 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-r2g75" event={"ID":"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9","Type":"ContainerStarted","Data":"493a09d06ba5b8a509563b7cbafdc69915276ee02c5c5738e83ffa8c3e431673"} Jan 31 16:44:52 crc kubenswrapper[4730]: I0131 16:44:52.716479 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.474479021 podStartE2EDuration="17.71646502s" podCreationTimestamp="2026-01-31 16:44:35 +0000 UTC" firstStartedPulling="2026-01-31 16:44:43.953317132 +0000 UTC m=+870.759374048" lastFinishedPulling="2026-01-31 16:44:52.195303121 +0000 UTC m=+879.001360047" observedRunningTime="2026-01-31 16:44:52.715722672 +0000 UTC m=+879.521779608" watchObservedRunningTime="2026-01-31 16:44:52.71646502 +0000 UTC m=+879.522521936" Jan 31 16:44:52 crc kubenswrapper[4730]: I0131 16:44:52.744449 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.171685703 podStartE2EDuration="20.744418476s" podCreationTimestamp="2026-01-31 16:44:32 +0000 UTC" firstStartedPulling="2026-01-31 16:44:42.631629653 +0000 UTC m=+869.437686569" lastFinishedPulling="2026-01-31 16:44:52.204362426 +0000 UTC m=+879.010419342" observedRunningTime="2026-01-31 16:44:52.73132321 +0000 UTC m=+879.537380126" watchObservedRunningTime="2026-01-31 16:44:52.744418476 +0000 UTC m=+879.550475422" Jan 31 16:44:53 crc kubenswrapper[4730]: I0131 16:44:53.710062 4730 generic.go:334] "Generic (PLEG): container finished" podID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerID="4b6fadd823c9d13b94793c06366ea4163d92b71fa49264142cb337d09feb29c5" exitCode=0 Jan 31 16:44:53 crc kubenswrapper[4730]: I0131 16:44:53.710276 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2g75" event={"ID":"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9","Type":"ContainerDied","Data":"4b6fadd823c9d13b94793c06366ea4163d92b71fa49264142cb337d09feb29c5"} Jan 31 16:44:53 crc kubenswrapper[4730]: I0131 16:44:53.713825 4730 generic.go:334] "Generic (PLEG): container finished" podID="532c157a-5c9c-4043-a85a-5075e5ed9db5" containerID="3bf35315d0791917507533f7f401e697133c074aecd1c74f5017a86da4507459" exitCode=0 Jan 31 16:44:53 crc kubenswrapper[4730]: I0131 16:44:53.713871 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"532c157a-5c9c-4043-a85a-5075e5ed9db5","Type":"ContainerDied","Data":"3bf35315d0791917507533f7f401e697133c074aecd1c74f5017a86da4507459"} Jan 31 16:44:53 crc kubenswrapper[4730]: I0131 16:44:53.723345 4730 generic.go:334] "Generic (PLEG): container finished" podID="f96d233a-2c8a-4873-b53b-eb8c3e792160" containerID="9141dd9ad4a702b2eafee1908c695a354ce88f8fbebfcde5bffc55b49b906648" exitCode=0 Jan 31 16:44:53 crc kubenswrapper[4730]: I0131 16:44:53.724059 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f96d233a-2c8a-4873-b53b-eb8c3e792160","Type":"ContainerDied","Data":"9141dd9ad4a702b2eafee1908c695a354ce88f8fbebfcde5bffc55b49b906648"} Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.163116 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66hvq"] Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.164606 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.192301 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66hvq"] Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.255824 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-catalog-content\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.255909 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cczx\" (UniqueName: \"kubernetes.io/projected/a81eb20f-04f9-4f66-b19a-19cd06c28329-kube-api-access-7cczx\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.255971 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-utilities\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.357689 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cczx\" (UniqueName: \"kubernetes.io/projected/a81eb20f-04f9-4f66-b19a-19cd06c28329-kube-api-access-7cczx\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.357816 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-utilities\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.357856 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-catalog-content\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.358350 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-utilities\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.358388 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-catalog-content\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.360120 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.382988 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cczx\" (UniqueName: \"kubernetes.io/projected/a81eb20f-04f9-4f66-b19a-19cd06c28329-kube-api-access-7cczx\") pod \"redhat-operators-66hvq\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.487506 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.745986 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerID="07638b641e2aef0d0fc1e892383da86469effd9e7e4d941c7894b904a1287eae" exitCode=0 Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.746266 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" event={"ID":"c6c05f77-50d7-4933-aca0-45a255bbd253","Type":"ContainerDied","Data":"07638b641e2aef0d0fc1e892383da86469effd9e7e4d941c7894b904a1287eae"} Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.748850 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wnc5j"] Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.751119 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.753213 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f96d233a-2c8a-4873-b53b-eb8c3e792160","Type":"ContainerStarted","Data":"f5d8cc3df134a95e44ce9859a3da6c27f57190ff210124fd391e219d4aeef0f3"} Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.760015 4730 generic.go:334] "Generic (PLEG): container finished" podID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerID="41b6ce30ee1e3f95c70ee9a8eced57d9c5c6a1ae816cf4ac75f606a08e8baf3d" exitCode=0 Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.760075 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" event={"ID":"98fb2db2-b72a-4c8a-94ba-08e1567ba221","Type":"ContainerDied","Data":"41b6ce30ee1e3f95c70ee9a8eced57d9c5c6a1ae816cf4ac75f606a08e8baf3d"} Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.765361 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnc5j"] Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.777884 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2g75" event={"ID":"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9","Type":"ContainerStarted","Data":"a6494b5417e344d46740e3882590bc379665a20495f2ff07307610a4c7f3354c"} Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.782877 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"532c157a-5c9c-4043-a85a-5075e5ed9db5","Type":"ContainerStarted","Data":"cbc6a9165a44e28aad4d49d29154899eec8aa75a6d2d0aa3d358b65ed190f9bb"} Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.806469 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.36917336 podStartE2EDuration="30.806449196s" podCreationTimestamp="2026-01-31 16:44:24 +0000 UTC" firstStartedPulling="2026-01-31 
16:44:42.181456558 +0000 UTC m=+868.987513474" lastFinishedPulling="2026-01-31 16:44:49.618732394 +0000 UTC m=+876.424789310" observedRunningTime="2026-01-31 16:44:54.801972295 +0000 UTC m=+881.608029211" watchObservedRunningTime="2026-01-31 16:44:54.806449196 +0000 UTC m=+881.612506112" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.864736 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-utilities\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.864783 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7kk\" (UniqueName: \"kubernetes.io/projected/8b6676c8-c57e-4081-b77c-47e5a534abb0-kube-api-access-vz7kk\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.864827 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-catalog-content\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.891925 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.046081466 podStartE2EDuration="29.891904852s" podCreationTimestamp="2026-01-31 16:44:25 +0000 UTC" firstStartedPulling="2026-01-31 16:44:41.779253415 +0000 UTC m=+868.585310331" lastFinishedPulling="2026-01-31 16:44:49.625076801 +0000 UTC m=+876.431133717" observedRunningTime="2026-01-31 16:44:54.87817959 +0000 UTC m=+881.684236506" watchObservedRunningTime="2026-01-31 16:44:54.891904852 +0000 UTC m=+881.697961768" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.957525 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66hvq"] Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.966619 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-utilities\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.966707 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7kk\" (UniqueName: \"kubernetes.io/projected/8b6676c8-c57e-4081-b77c-47e5a534abb0-kube-api-access-vz7kk\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.967037 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-catalog-content\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc 
kubenswrapper[4730]: I0131 16:44:54.967351 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-catalog-content\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.967106 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-utilities\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:54 crc kubenswrapper[4730]: W0131 16:44:54.970221 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda81eb20f_04f9_4f66_b19a_19cd06c28329.slice/crio-52b383197bf3e164b055b20a5e6f23bc14c2950863894e6a9cf3577715d1a12c WatchSource:0}: Error finding container 52b383197bf3e164b055b20a5e6f23bc14c2950863894e6a9cf3577715d1a12c: Status 404 returned error can't find the container with id 52b383197bf3e164b055b20a5e6f23bc14c2950863894e6a9cf3577715d1a12c Jan 31 16:44:54 crc kubenswrapper[4730]: I0131 16:44:54.985470 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7kk\" (UniqueName: \"kubernetes.io/projected/8b6676c8-c57e-4081-b77c-47e5a534abb0-kube-api-access-vz7kk\") pod \"certified-operators-wnc5j\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.023128 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.069456 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.126253 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.359811 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.473984 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.679697 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.679745 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.698811 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnc5j"] Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.791095 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" event={"ID":"c6c05f77-50d7-4933-aca0-45a255bbd253","Type":"ContainerStarted","Data":"1d7d78a01922141d4d8a8dc677554e08baf2490351707b00eeeb0ad2744625cc"} Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.791318 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.793729 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" event={"ID":"98fb2db2-b72a-4c8a-94ba-08e1567ba221","Type":"ContainerStarted","Data":"6c5a062e2e0f80872e51a7c7ee0b3f04d6d56a47b02d6eb5909c186015864359"} Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.794026 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.795044 4730 generic.go:334] "Generic (PLEG): container finished" podID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerID="e9e82a70cdbcbbab3aaa14c3bfef2ac97b4fcdcd8d1169621154defaa05eed7f" exitCode=0 Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.795097 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hvq" event={"ID":"a81eb20f-04f9-4f66-b19a-19cd06c28329","Type":"ContainerDied","Data":"e9e82a70cdbcbbab3aaa14c3bfef2ac97b4fcdcd8d1169621154defaa05eed7f"} Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.795116 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hvq" event={"ID":"a81eb20f-04f9-4f66-b19a-19cd06c28329","Type":"ContainerStarted","Data":"52b383197bf3e164b055b20a5e6f23bc14c2950863894e6a9cf3577715d1a12c"} Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.797173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnc5j" event={"ID":"8b6676c8-c57e-4081-b77c-47e5a534abb0","Type":"ContainerStarted","Data":"37cc6df0683a5eafe4b5274055be34721cbe0cb0bcb23423eabfa64498a63770"} Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.800434 4730 generic.go:334] "Generic (PLEG): container finished" podID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerID="a6494b5417e344d46740e3882590bc379665a20495f2ff07307610a4c7f3354c" exitCode=0 Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.800900 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-r2g75" event={"ID":"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9","Type":"ContainerDied","Data":"a6494b5417e344d46740e3882590bc379665a20495f2ff07307610a4c7f3354c"} Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.800940 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2g75" event={"ID":"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9","Type":"ContainerStarted","Data":"2210f647d6cb9c4adcd8cc8f4de0212c32905733f5e201a15bbcdfb6cd82548b"} Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.801669 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.830807 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" podStartSLOduration=-9223372003.023994 podStartE2EDuration="33.83078205s" podCreationTimestamp="2026-01-31 16:44:22 +0000 UTC" firstStartedPulling="2026-01-31 16:44:23.557348079 +0000 UTC m=+850.363404995" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:44:55.828314469 +0000 UTC m=+882.634371375" watchObservedRunningTime="2026-01-31 16:44:55.83078205 +0000 UTC m=+882.636838966" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.866208 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" podStartSLOduration=2.832077758 podStartE2EDuration="32.866190971s" podCreationTimestamp="2026-01-31 16:44:23 +0000 UTC" firstStartedPulling="2026-01-31 16:44:23.861784949 +0000 UTC m=+850.667841865" lastFinishedPulling="2026-01-31 16:44:53.895898162 +0000 UTC m=+880.701955078" observedRunningTime="2026-01-31 16:44:55.864331035 +0000 UTC m=+882.670387951" watchObservedRunningTime="2026-01-31 16:44:55.866190971 +0000 UTC m=+882.672247887" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.880591 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.911868 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r2g75" podStartSLOduration=4.422576511 podStartE2EDuration="5.911854957s" podCreationTimestamp="2026-01-31 16:44:50 +0000 UTC" firstStartedPulling="2026-01-31 16:44:53.713131857 +0000 UTC m=+880.519188773" lastFinishedPulling="2026-01-31 16:44:55.202410303 +0000 UTC m=+882.008467219" observedRunningTime="2026-01-31 16:44:55.90795743 +0000 UTC m=+882.714014346" watchObservedRunningTime="2026-01-31 16:44:55.911854957 +0000 UTC m=+882.717911873" Jan 31 16:44:55 crc kubenswrapper[4730]: I0131 16:44:55.953468 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.216201 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8ws2c"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.280841 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vrcjk"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.282826 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.292153 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.308705 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vrcjk"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.399225 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-sw5kq"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.415556 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlr4v\" (UniqueName: \"kubernetes.io/projected/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-kube-api-access-mlr4v\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.415720 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.415941 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.415984 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.416059 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-config\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.421291 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.422992 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sw5kq"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.509546 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5mhxq"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520634 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520722 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7445317c-77cd-4b07-b3d9-17f5d07f247d-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520759 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7445317c-77cd-4b07-b3d9-17f5d07f247d-combined-ca-bundle\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520785 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520821 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7445317c-77cd-4b07-b3d9-17f5d07f247d-ovn-rundir\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520847 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd925\" (UniqueName: \"kubernetes.io/projected/7445317c-77cd-4b07-b3d9-17f5d07f247d-kube-api-access-jd925\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520871 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-config\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520905 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7445317c-77cd-4b07-b3d9-17f5d07f247d-ovs-rundir\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520939 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlr4v\" (UniqueName: \"kubernetes.io/projected/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-kube-api-access-mlr4v\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.520959 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7445317c-77cd-4b07-b3d9-17f5d07f247d-config\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.521868 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-ovsdbserver-nb\") 
pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.522626 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-config\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.522921 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.568985 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9tzhb"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.581498 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlr4v\" (UniqueName: \"kubernetes.io/projected/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-kube-api-access-mlr4v\") pod \"dnsmasq-dns-7fd796d7df-vrcjk\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.627696 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.630223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7445317c-77cd-4b07-b3d9-17f5d07f247d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.637317 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7445317c-77cd-4b07-b3d9-17f5d07f247d-combined-ca-bundle\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.637427 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7445317c-77cd-4b07-b3d9-17f5d07f247d-ovn-rundir\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.637515 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd925\" (UniqueName: \"kubernetes.io/projected/7445317c-77cd-4b07-b3d9-17f5d07f247d-kube-api-access-jd925\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.651759 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7445317c-77cd-4b07-b3d9-17f5d07f247d-ovn-rundir\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " 
pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.641780 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.672463 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7445317c-77cd-4b07-b3d9-17f5d07f247d-combined-ca-bundle\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.673562 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7445317c-77cd-4b07-b3d9-17f5d07f247d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.669497 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7445317c-77cd-4b07-b3d9-17f5d07f247d-ovs-rundir\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.683828 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9tzhb"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.686942 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7445317c-77cd-4b07-b3d9-17f5d07f247d-ovs-rundir\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.704371 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.705421 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7445317c-77cd-4b07-b3d9-17f5d07f247d-config\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.706558 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7445317c-77cd-4b07-b3d9-17f5d07f247d-config\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.717603 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd925\" (UniqueName: \"kubernetes.io/projected/7445317c-77cd-4b07-b3d9-17f5d07f247d-kube-api-access-jd925\") pod \"ovn-controller-metrics-sw5kq\" (UID: \"7445317c-77cd-4b07-b3d9-17f5d07f247d\") " pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.731825 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.733628 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.743942 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.744466 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sw5kq" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.748353 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.749002 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cpjcb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.749487 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.784267 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808433 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qm4d\" (UniqueName: \"kubernetes.io/projected/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-kube-api-access-9qm4d\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808470 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-config\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808503 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-config\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808525 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8796q\" (UniqueName: \"kubernetes.io/projected/738cd861-a897-43d9-b336-cbb6afca4e96-kube-api-access-8796q\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-scripts\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808578 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808601 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808622 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808646 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808665 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808696 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.808717 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.842289 4730 generic.go:334] "Generic (PLEG): container finished" podID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerID="9d57277fd097cd390c20c1c672607a13550085040ca00b1a964f9434cefce34b" exitCode=0 Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.843403 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnc5j" event={"ID":"8b6676c8-c57e-4081-b77c-47e5a534abb0","Type":"ContainerDied","Data":"9d57277fd097cd390c20c1c672607a13550085040ca00b1a964f9434cefce34b"} Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.910457 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8796q\" (UniqueName: \"kubernetes.io/projected/738cd861-a897-43d9-b336-cbb6afca4e96-kube-api-access-8796q\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.910763 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-scripts\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 
16:44:56.910818 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.910847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.910904 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.910927 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.910951 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.910984 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.911006 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.911028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-config\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.911044 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qm4d\" (UniqueName: \"kubernetes.io/projected/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-kube-api-access-9qm4d\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.911076 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-config\") pod 
\"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.911865 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-config\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.915531 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.916305 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-scripts\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.916544 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.916645 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-config\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.917560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.918151 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.941081 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.941538 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.942081 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qm4d\" (UniqueName: 
\"kubernetes.io/projected/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-kube-api-access-9qm4d\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.949623 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5af028-91b9-4bfa-a3b9-efa454ff8d31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6a5af028-91b9-4bfa-a3b9-efa454ff8d31\") " pod="openstack/ovn-northd-0" Jan 31 16:44:56 crc kubenswrapper[4730]: I0131 16:44:56.959474 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8796q\" (UniqueName: \"kubernetes.io/projected/738cd861-a897-43d9-b336-cbb6afca4e96-kube-api-access-8796q\") pod \"dnsmasq-dns-86db49b7ff-9tzhb\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.128577 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.157899 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.157954 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.158549 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.311910 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.504745 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sw5kq"] Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.627733 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vrcjk"] Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.852837 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.891107 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" event={"ID":"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d","Type":"ContainerStarted","Data":"c12595c5ad383939ed02f2ffc132d3b915e37ca41d676cc7793b67fc2084dd1d"} Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.893763 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sw5kq" event={"ID":"7445317c-77cd-4b07-b3d9-17f5d07f247d","Type":"ContainerStarted","Data":"cc64ab5a9263c084dbd6a190aca657fbb26c8812df8b55f8dd601da49b185122"} Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.893960 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerName="dnsmasq-dns" containerID="cri-o://1d7d78a01922141d4d8a8dc677554e08baf2490351707b00eeeb0ad2744625cc" gracePeriod=10 Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.894579 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerName="dnsmasq-dns" 
containerID="cri-o://6c5a062e2e0f80872e51a7c7ee0b3f04d6d56a47b02d6eb5909c186015864359" gracePeriod=10 Jan 31 16:44:57 crc kubenswrapper[4730]: I0131 16:44:57.959071 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9tzhb"] Jan 31 16:44:58 crc kubenswrapper[4730]: I0131 16:44:58.899717 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6a5af028-91b9-4bfa-a3b9-efa454ff8d31","Type":"ContainerStarted","Data":"f385753ebcf2e2790c9848a17f2756d75c8fb7133e48ba2f597cf0404008d542"} Jan 31 16:44:58 crc kubenswrapper[4730]: I0131 16:44:58.901061 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" event={"ID":"738cd861-a897-43d9-b336-cbb6afca4e96","Type":"ContainerStarted","Data":"70810ef69032ac57d86b056f23d60ede4650d89c2c133005a14a56b41cf4c2ac"} Jan 31 16:44:58 crc kubenswrapper[4730]: I0131 16:44:58.902709 4730 generic.go:334] "Generic (PLEG): container finished" podID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerID="1d7d78a01922141d4d8a8dc677554e08baf2490351707b00eeeb0ad2744625cc" exitCode=0 Jan 31 16:44:58 crc kubenswrapper[4730]: I0131 16:44:58.902751 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" event={"ID":"c6c05f77-50d7-4933-aca0-45a255bbd253","Type":"ContainerDied","Data":"1d7d78a01922141d4d8a8dc677554e08baf2490351707b00eeeb0ad2744625cc"} Jan 31 16:44:58 crc kubenswrapper[4730]: I0131 16:44:58.906147 4730 generic.go:334] "Generic (PLEG): container finished" podID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerID="6c5a062e2e0f80872e51a7c7ee0b3f04d6d56a47b02d6eb5909c186015864359" exitCode=0 Jan 31 16:44:58 crc kubenswrapper[4730]: I0131 16:44:58.906216 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" event={"ID":"98fb2db2-b72a-4c8a-94ba-08e1567ba221","Type":"ContainerDied","Data":"6c5a062e2e0f80872e51a7c7ee0b3f04d6d56a47b02d6eb5909c186015864359"} Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.251755 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.470760 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vrcjk"] Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.504403 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7w5f2"] Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.506079 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.525063 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7w5f2"] Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.605588 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.605780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-config\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.605890 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-dns-svc\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.606056 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cldt\" (UniqueName: \"kubernetes.io/projected/24ce46a6-467c-4c82-9f68-900abb2601e1-kube-api-access-5cldt\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.606165 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.708370 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-config\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.708715 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-dns-svc\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.708750 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cldt\" (UniqueName: \"kubernetes.io/projected/24ce46a6-467c-4c82-9f68-900abb2601e1-kube-api-access-5cldt\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.708777 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.708828 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.709747 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.710204 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.710539 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-dns-svc\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.710815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-config\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.733522 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cldt\" (UniqueName: \"kubernetes.io/projected/24ce46a6-467c-4c82-9f68-900abb2601e1-kube-api-access-5cldt\") pod \"dnsmasq-dns-698758b865-7w5f2\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.829181 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.914660 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sw5kq" event={"ID":"7445317c-77cd-4b07-b3d9-17f5d07f247d","Type":"ContainerStarted","Data":"5eedc4d58b8ed01e65bab2923af6e5a0a93e0f7e403ea681e363bcf29a72ffce"} Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.927069 4730 generic.go:334] "Generic (PLEG): container finished" podID="4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" containerID="8bc980d94abdc5f8deca524e19a8cd9bd9cbd102966bfba3332de8612ce28c9a" exitCode=0 Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.927142 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" event={"ID":"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d","Type":"ContainerDied","Data":"8bc980d94abdc5f8deca524e19a8cd9bd9cbd102966bfba3332de8612ce28c9a"} Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.943131 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-sw5kq" podStartSLOduration=3.9431143459999998 podStartE2EDuration="3.943114346s" podCreationTimestamp="2026-01-31 16:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:44:59.942080791 +0000 UTC m=+886.748137707" watchObservedRunningTime="2026-01-31 16:44:59.943114346 +0000 UTC m=+886.749171262" Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.945297 4730 generic.go:334] "Generic (PLEG): container finished" podID="738cd861-a897-43d9-b336-cbb6afca4e96" containerID="94609f806e4fddcc74454716985348ee449def22a0a71eb0a4fa170afbca4e00" exitCode=0 Jan 31 16:44:59 crc kubenswrapper[4730]: I0131 16:44:59.945351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" event={"ID":"738cd861-a897-43d9-b336-cbb6afca4e96","Type":"ContainerDied","Data":"94609f806e4fddcc74454716985348ee449def22a0a71eb0a4fa170afbca4e00"} Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.055683 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.124205 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7gw\" (UniqueName: \"kubernetes.io/projected/98fb2db2-b72a-4c8a-94ba-08e1567ba221-kube-api-access-xh7gw\") pod \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.124353 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-dns-svc\") pod \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.124537 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-config\") pod \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\" (UID: \"98fb2db2-b72a-4c8a-94ba-08e1567ba221\") " Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.133153 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fb2db2-b72a-4c8a-94ba-08e1567ba221-kube-api-access-xh7gw" (OuterVolumeSpecName: "kube-api-access-xh7gw") pod "98fb2db2-b72a-4c8a-94ba-08e1567ba221" (UID: "98fb2db2-b72a-4c8a-94ba-08e1567ba221"). InnerVolumeSpecName "kube-api-access-xh7gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.177593 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98fb2db2-b72a-4c8a-94ba-08e1567ba221" (UID: "98fb2db2-b72a-4c8a-94ba-08e1567ba221"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.229935 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7gw\" (UniqueName: \"kubernetes.io/projected/98fb2db2-b72a-4c8a-94ba-08e1567ba221-kube-api-access-xh7gw\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.229957 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.238258 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-config" (OuterVolumeSpecName: "config") pod "98fb2db2-b72a-4c8a-94ba-08e1567ba221" (UID: "98fb2db2-b72a-4c8a-94ba-08e1567ba221"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.267067 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd"] Jan 31 16:45:00 crc kubenswrapper[4730]: E0131 16:45:00.267630 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerName="dnsmasq-dns" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.267710 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerName="dnsmasq-dns" Jan 31 16:45:00 crc kubenswrapper[4730]: E0131 16:45:00.267784 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerName="init" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.267851 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerName="init" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.268076 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" containerName="dnsmasq-dns" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.268615 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.273856 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.283055 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.296625 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.298953 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd"] Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.331167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8209b289-3057-4a18-901a-5faa51042bc0-config-volume\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.331212 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8209b289-3057-4a18-901a-5faa51042bc0-secret-volume\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.331236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9pg\" (UniqueName: \"kubernetes.io/projected/8209b289-3057-4a18-901a-5faa51042bc0-kube-api-access-sm9pg\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.331279 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fb2db2-b72a-4c8a-94ba-08e1567ba221-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.432400 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-dns-svc\") pod \"c6c05f77-50d7-4933-aca0-45a255bbd253\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.432481 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxd6g\" (UniqueName: \"kubernetes.io/projected/c6c05f77-50d7-4933-aca0-45a255bbd253-kube-api-access-cxd6g\") pod \"c6c05f77-50d7-4933-aca0-45a255bbd253\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.432643 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-config\") pod \"c6c05f77-50d7-4933-aca0-45a255bbd253\" (UID: \"c6c05f77-50d7-4933-aca0-45a255bbd253\") " Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.432843 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8209b289-3057-4a18-901a-5faa51042bc0-config-volume\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.432865 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8209b289-3057-4a18-901a-5faa51042bc0-secret-volume\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.432891 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9pg\" (UniqueName: \"kubernetes.io/projected/8209b289-3057-4a18-901a-5faa51042bc0-kube-api-access-sm9pg\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.443008 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8209b289-3057-4a18-901a-5faa51042bc0-config-volume\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.452110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8209b289-3057-4a18-901a-5faa51042bc0-secret-volume\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.454093 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c05f77-50d7-4933-aca0-45a255bbd253-kube-api-access-cxd6g" (OuterVolumeSpecName: "kube-api-access-cxd6g") pod "c6c05f77-50d7-4933-aca0-45a255bbd253" (UID: "c6c05f77-50d7-4933-aca0-45a255bbd253"). InnerVolumeSpecName "kube-api-access-cxd6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.460178 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9pg\" (UniqueName: \"kubernetes.io/projected/8209b289-3057-4a18-901a-5faa51042bc0-kube-api-access-sm9pg\") pod \"collect-profiles-29497965-497zd\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.517099 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-config" (OuterVolumeSpecName: "config") pod "c6c05f77-50d7-4933-aca0-45a255bbd253" (UID: "c6c05f77-50d7-4933-aca0-45a255bbd253"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.547640 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.547672 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxd6g\" (UniqueName: \"kubernetes.io/projected/c6c05f77-50d7-4933-aca0-45a255bbd253-kube-api-access-cxd6g\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.569869 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6c05f77-50d7-4933-aca0-45a255bbd253" (UID: "c6c05f77-50d7-4933-aca0-45a255bbd253"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.587316 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.649747 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6c05f77-50d7-4933-aca0-45a255bbd253-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.981923 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" event={"ID":"98fb2db2-b72a-4c8a-94ba-08e1567ba221","Type":"ContainerDied","Data":"3f32b065073d220ef58bf584ffc8856d5ffc64f14f44ceb965ab3d9397a51023"} Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.982258 4730 scope.go:117] "RemoveContainer" containerID="6c5a062e2e0f80872e51a7c7ee0b3f04d6d56a47b02d6eb5909c186015864359" Jan 31 16:45:00 crc kubenswrapper[4730]: I0131 16:45:00.982168 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8ws2c" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.033604 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" event={"ID":"c6c05f77-50d7-4933-aca0-45a255bbd253","Type":"ContainerDied","Data":"c18119f95b4a1096048dfe3f13b7497f9df26cfaf4e93bb1f9ad37fddd8d14b8"} Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.033666 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5mhxq" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.049924 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8ws2c"] Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.059668 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8ws2c"] Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.114114 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.116193 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerName="dnsmasq-dns" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.116209 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerName="dnsmasq-dns" Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.116234 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerName="init" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.116241 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerName="init" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.116424 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" containerName="dnsmasq-dns" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.120571 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.127088 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.127316 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6qtkw" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.127422 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.130738 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.159311 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.188236 4730 scope.go:117] "RemoveContainer" containerID="41b6ce30ee1e3f95c70ee9a8eced57d9c5c6a1ae816cf4ac75f606a08e8baf3d" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.212304 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.253918 4730 scope.go:117] "RemoveContainer" containerID="1d7d78a01922141d4d8a8dc677554e08baf2490351707b00eeeb0ad2744625cc" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.254106 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5mhxq"] Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266248 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-ovsdbserver-nb\") pod \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266292 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlr4v\" (UniqueName: \"kubernetes.io/projected/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-kube-api-access-mlr4v\") pod \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266331 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-dns-svc\") pod \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266361 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-config\") pod \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\" (UID: \"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d\") " Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266621 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266652 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bds\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-kube-api-access-k5bds\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266693 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266729 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-lock\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266767 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.266788 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-cache\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.270387 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5mhxq"] Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.304515 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.304564 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.310333 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-kube-api-access-mlr4v" (OuterVolumeSpecName: "kube-api-access-mlr4v") pod "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" (UID: "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d"). InnerVolumeSpecName "kube-api-access-mlr4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.319525 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" (UID: "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.329005 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7w5f2"] Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.335409 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" (UID: "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.349312 4730 scope.go:117] "RemoveContainer" containerID="07638b641e2aef0d0fc1e892383da86469effd9e7e4d941c7894b904a1287eae" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.360719 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-config" (OuterVolumeSpecName: "config") pod "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" (UID: "4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369114 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-lock\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369156 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369180 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-cache\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369222 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369246 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bds\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-kube-api-access-k5bds\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369290 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369301 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlr4v\" (UniqueName: \"kubernetes.io/projected/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-kube-api-access-mlr4v\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369310 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.369318 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.370119 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.370218 4730 projected.go:194] Error preparing data for projected volume etc-swift for 
pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.370335 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift podName:3656b8f0-e1d3-4214-9c23-dd437a57f2ad nodeName:}" failed. No retries permitted until 2026-01-31 16:45:01.870316109 +0000 UTC m=+888.676373025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift") pod "swift-storage-0" (UID: "3656b8f0-e1d3-4214-9c23-dd437a57f2ad") : configmap "swift-ring-files" not found Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.373324 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.383028 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-cache\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.384809 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-lock\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.396145 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.403364 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.409723 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bds\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-kube-api-access-k5bds\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.418942 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.591102 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd"] Jan 31 16:45:01 crc kubenswrapper[4730]: W0131 16:45:01.600244 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8209b289_3057_4a18_901a_5faa51042bc0.slice/crio-e0c2d8a3cf164d771198c760cc065f3d7b16c1a9448de43a0538563391686ab1 WatchSource:0}: Error finding container 
e0c2d8a3cf164d771198c760cc065f3d7b16c1a9448de43a0538563391686ab1: Status 404 returned error can't find the container with id e0c2d8a3cf164d771198c760cc065f3d7b16c1a9448de43a0538563391686ab1 Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.888700 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.888915 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.888951 4730 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.889012 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift podName:3656b8f0-e1d3-4214-9c23-dd437a57f2ad nodeName:}" failed. No retries permitted until 2026-01-31 16:45:02.888991588 +0000 UTC m=+889.695048504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift") pod "swift-storage-0" (UID: "3656b8f0-e1d3-4214-9c23-dd437a57f2ad") : configmap "swift-ring-files" not found Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.947423 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zhm"] Jan 31 16:45:01 crc kubenswrapper[4730]: E0131 16:45:01.948281 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" containerName="init" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.948461 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" containerName="init" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.948794 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" containerName="init" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.950087 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:01 crc kubenswrapper[4730]: I0131 16:45:01.967636 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zhm"] Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.054438 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" event={"ID":"4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d","Type":"ContainerDied","Data":"c12595c5ad383939ed02f2ffc132d3b915e37ca41d676cc7793b67fc2084dd1d"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.054482 4730 scope.go:117] "RemoveContainer" containerID="8bc980d94abdc5f8deca524e19a8cd9bd9cbd102966bfba3332de8612ce28c9a" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.054568 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vrcjk" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.065407 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hvq" event={"ID":"a81eb20f-04f9-4f66-b19a-19cd06c28329","Type":"ContainerStarted","Data":"3473a981f3486e6e812449f116a0c531face37ef015ae7a5ccaed295b2740319"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.067310 4730 generic.go:334] "Generic (PLEG): container finished" podID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerID="8f8776cb29cf894555e4f1ac088162722f24cf8a875d547a5720e3f33d9e62e0" exitCode=0 Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.067366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnc5j" event={"ID":"8b6676c8-c57e-4081-b77c-47e5a534abb0","Type":"ContainerDied","Data":"8f8776cb29cf894555e4f1ac088162722f24cf8a875d547a5720e3f33d9e62e0"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.083574 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7w5f2" event={"ID":"24ce46a6-467c-4c82-9f68-900abb2601e1","Type":"ContainerStarted","Data":"625233b49ca8e0677eb7065430535d91777f61177f12f12d64a9ed194843f04f"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.083609 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7w5f2" event={"ID":"24ce46a6-467c-4c82-9f68-900abb2601e1","Type":"ContainerStarted","Data":"49954cdf37bba7d47a6daec53b95edfec058a007239f63a9069c59409bf5621c"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.092193 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-utilities\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.092257 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-catalog-content\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.092331 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxr4\" (UniqueName: \"kubernetes.io/projected/a0d12ea3-22b7-4e96-9a8e-102e6473918c-kube-api-access-mnxr4\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.099504 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" event={"ID":"738cd861-a897-43d9-b336-cbb6afca4e96","Type":"ContainerStarted","Data":"ee6915991fcfc34555b471a5f43111111042fabaf775a3f96c89500dc2f0c7a2"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.100228 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.122025 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" 
event={"ID":"8209b289-3057-4a18-901a-5faa51042bc0","Type":"ContainerStarted","Data":"8956b15e2bb6c6d43c4b116c63b7c52bc49adc85a6ad9358b4065f82b99a8de5"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.122999 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" event={"ID":"8209b289-3057-4a18-901a-5faa51042bc0","Type":"ContainerStarted","Data":"e0c2d8a3cf164d771198c760cc065f3d7b16c1a9448de43a0538563391686ab1"} Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.160915 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" podStartSLOduration=6.16089485 podStartE2EDuration="6.16089485s" podCreationTimestamp="2026-01-31 16:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:02.152584943 +0000 UTC m=+888.958641879" watchObservedRunningTime="2026-01-31 16:45:02.16089485 +0000 UTC m=+888.966951766" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.197317 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-utilities\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.200159 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-catalog-content\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.200500 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxr4\" (UniqueName: \"kubernetes.io/projected/a0d12ea3-22b7-4e96-9a8e-102e6473918c-kube-api-access-mnxr4\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.200539 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-catalog-content\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.200070 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-utilities\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.231791 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.235072 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vrcjk"] Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.241720 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vrcjk"] Jan 31 16:45:02 crc 
kubenswrapper[4730]: I0131 16:45:02.241734 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnxr4\" (UniqueName: \"kubernetes.io/projected/a0d12ea3-22b7-4e96-9a8e-102e6473918c-kube-api-access-mnxr4\") pod \"redhat-marketplace-x6zhm\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.268109 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.474436 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d" path="/var/lib/kubelet/pods/4f9ffde3-96fc-4fbd-b165-9f54f7b6b03d/volumes" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.475274 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fb2db2-b72a-4c8a-94ba-08e1567ba221" path="/var/lib/kubelet/pods/98fb2db2-b72a-4c8a-94ba-08e1567ba221/volumes" Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.475963 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c05f77-50d7-4933-aca0-45a255bbd253" path="/var/lib/kubelet/pods/c6c05f77-50d7-4933-aca0-45a255bbd253/volumes" Jan 31 16:45:02 crc kubenswrapper[4730]: E0131 16:45:02.792658 4730 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.64:56438->38.102.83.64:44915: read tcp 38.102.83.64:56438->38.102.83.64:44915: read: connection reset by peer Jan 31 16:45:02 crc kubenswrapper[4730]: I0131 16:45:02.926652 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:02 crc kubenswrapper[4730]: E0131 16:45:02.927040 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 16:45:02 crc kubenswrapper[4730]: E0131 16:45:02.927058 4730 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 16:45:02 crc kubenswrapper[4730]: E0131 16:45:02.927102 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift podName:3656b8f0-e1d3-4214-9c23-dd437a57f2ad nodeName:}" failed. No retries permitted until 2026-01-31 16:45:04.927087294 +0000 UTC m=+891.733144210 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift") pod "swift-storage-0" (UID: "3656b8f0-e1d3-4214-9c23-dd437a57f2ad") : configmap "swift-ring-files" not found Jan 31 16:45:03 crc kubenswrapper[4730]: I0131 16:45:03.169259 4730 generic.go:334] "Generic (PLEG): container finished" podID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerID="625233b49ca8e0677eb7065430535d91777f61177f12f12d64a9ed194843f04f" exitCode=0 Jan 31 16:45:03 crc kubenswrapper[4730]: I0131 16:45:03.169578 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7w5f2" event={"ID":"24ce46a6-467c-4c82-9f68-900abb2601e1","Type":"ContainerDied","Data":"625233b49ca8e0677eb7065430535d91777f61177f12f12d64a9ed194843f04f"} Jan 31 16:45:03 crc kubenswrapper[4730]: I0131 16:45:03.243219 4730 generic.go:334] "Generic (PLEG): container finished" podID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerID="3473a981f3486e6e812449f116a0c531face37ef015ae7a5ccaed295b2740319" exitCode=0 Jan 31 16:45:03 crc kubenswrapper[4730]: I0131 16:45:03.244720 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hvq" event={"ID":"a81eb20f-04f9-4f66-b19a-19cd06c28329","Type":"ContainerDied","Data":"3473a981f3486e6e812449f116a0c531face37ef015ae7a5ccaed295b2740319"} Jan 31 16:45:03 crc kubenswrapper[4730]: I0131 16:45:03.292350 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zhm"] Jan 31 16:45:03 crc kubenswrapper[4730]: I0131 16:45:03.328363 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" podStartSLOduration=3.3283458120000002 podStartE2EDuration="3.328345812s" podCreationTimestamp="2026-01-31 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:03.319557784 +0000 UTC m=+890.125614700" watchObservedRunningTime="2026-01-31 16:45:03.328345812 +0000 UTC m=+890.134402728" Jan 31 16:45:03 crc kubenswrapper[4730]: I0131 16:45:03.739836 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2g75"] Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.252425 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hvq" event={"ID":"a81eb20f-04f9-4f66-b19a-19cd06c28329","Type":"ContainerStarted","Data":"41a686f2464e22e3ad094b3ba86d11e87ed5255556ef8f43e2a7d3e8a3082d2f"} Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.253836 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6a5af028-91b9-4bfa-a3b9-efa454ff8d31","Type":"ContainerStarted","Data":"2840b0d9d6cb4774c75ada82bc788080bb543d31be7beeddaa268ccd1760beaf"} Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.253856 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6a5af028-91b9-4bfa-a3b9-efa454ff8d31","Type":"ContainerStarted","Data":"407854b9ac849f51dae0c194a94aa26c697310054b340084fd1484c8c5bba11b"} Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.253961 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.255853 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-wnc5j" event={"ID":"8b6676c8-c57e-4081-b77c-47e5a534abb0","Type":"ContainerStarted","Data":"7fe502529bbad1a3216beae3c6a7646a562e27dcdff23d620d84ff4c8bc9e12c"} Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.257382 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerStarted","Data":"946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269"} Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.257408 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerStarted","Data":"fda5a09476ce3c4014808dd193ebad3422ec3702f067385cdbbbfd847a19afc6"} Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.259010 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7w5f2" event={"ID":"24ce46a6-467c-4c82-9f68-900abb2601e1","Type":"ContainerStarted","Data":"6261f1eb4a5de0d08c20c1d2d6ba279f9b66d002c903f34d066f4ece82535d1a"} Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.259194 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r2g75" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="registry-server" containerID="cri-o://2210f647d6cb9c4adcd8cc8f4de0212c32905733f5e201a15bbcdfb6cd82548b" gracePeriod=2 Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.276544 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66hvq" podStartSLOduration=2.393429623 podStartE2EDuration="10.276521683s" podCreationTimestamp="2026-01-31 16:44:54 +0000 UTC" firstStartedPulling="2026-01-31 16:44:55.797042001 +0000 UTC m=+882.603098917" lastFinishedPulling="2026-01-31 16:45:03.680134061 +0000 UTC m=+890.486190977" observedRunningTime="2026-01-31 16:45:04.271760894 +0000 UTC m=+891.077817820" watchObservedRunningTime="2026-01-31 16:45:04.276521683 +0000 UTC m=+891.082578599" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.290929 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.297918713 podStartE2EDuration="8.29090972s" podCreationTimestamp="2026-01-31 16:44:56 +0000 UTC" firstStartedPulling="2026-01-31 16:44:57.891930119 +0000 UTC m=+884.697987035" lastFinishedPulling="2026-01-31 16:45:02.884921126 +0000 UTC m=+889.690978042" observedRunningTime="2026-01-31 16:45:04.288179703 +0000 UTC m=+891.094236619" watchObservedRunningTime="2026-01-31 16:45:04.29090972 +0000 UTC m=+891.096966646" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.309499 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wnc5j" podStartSLOduration=4.267003975 podStartE2EDuration="10.309480462s" podCreationTimestamp="2026-01-31 16:44:54 +0000 UTC" firstStartedPulling="2026-01-31 16:44:56.847995487 +0000 UTC m=+883.654052403" lastFinishedPulling="2026-01-31 16:45:02.890471974 +0000 UTC m=+889.696528890" observedRunningTime="2026-01-31 16:45:04.305409001 +0000 UTC m=+891.111465917" watchObservedRunningTime="2026-01-31 16:45:04.309480462 +0000 UTC m=+891.115537378" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.360154 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-698758b865-7w5f2" podStartSLOduration=5.360142822 podStartE2EDuration="5.360142822s" podCreationTimestamp="2026-01-31 16:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:04.338497334 +0000 UTC m=+891.144554250" watchObservedRunningTime="2026-01-31 16:45:04.360142822 +0000 UTC m=+891.166199738" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.488288 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.488341 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.747236 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6znnp"] Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.748389 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.751243 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.751418 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.751510 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.781127 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6znnp"] Jan 31 16:45:04 crc kubenswrapper[4730]: E0131 16:45:04.794623 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-48ldp ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-48ldp ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-6znnp" podUID="26ea626b-8547-483d-8ae5-4457c3cff6dd" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.806512 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-md2pb"] Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.807668 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.811819 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6znnp"] Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.824947 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-md2pb"] Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.830042 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868098 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-combined-ca-bundle\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868153 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/62d8ac66-dbb1-4b02-844e-13123934241d-etc-swift\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868364 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-scripts\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868415 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-swiftconf\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868554 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-swiftconf\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868596 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-combined-ca-bundle\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868706 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868761 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt46q\" 
(UniqueName: \"kubernetes.io/projected/62d8ac66-dbb1-4b02-844e-13123934241d-kube-api-access-qt46q\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868811 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-dispersionconf\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868880 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26ea626b-8547-483d-8ae5-4457c3cff6dd-etc-swift\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868906 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-dispersionconf\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868938 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868964 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-scripts\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.868978 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48ldp\" (UniqueName: \"kubernetes.io/projected/26ea626b-8547-483d-8ae5-4457c3cff6dd-kube-api-access-48ldp\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970705 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/62d8ac66-dbb1-4b02-844e-13123934241d-etc-swift\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970780 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-scripts\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970818 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-swiftconf\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-swiftconf\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970868 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-combined-ca-bundle\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970906 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970921 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970947 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt46q\" (UniqueName: \"kubernetes.io/projected/62d8ac66-dbb1-4b02-844e-13123934241d-kube-api-access-qt46q\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.970995 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-dispersionconf\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971024 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26ea626b-8547-483d-8ae5-4457c3cff6dd-etc-swift\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971043 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-dispersionconf\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971063 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices\") pod 
\"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971083 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-scripts\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971100 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48ldp\" (UniqueName: \"kubernetes.io/projected/26ea626b-8547-483d-8ae5-4457c3cff6dd-kube-api-access-48ldp\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-combined-ca-bundle\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971338 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/62d8ac66-dbb1-4b02-844e-13123934241d-etc-swift\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.971586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-scripts\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: E0131 16:45:04.971646 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:04 crc kubenswrapper[4730]: E0131 16:45:04.971706 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:45:05.471685531 +0000 UTC m=+892.277742547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:45:04 crc kubenswrapper[4730]: E0131 16:45:04.971966 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:04 crc kubenswrapper[4730]: E0131 16:45:04.972010 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices podName:26ea626b-8547-483d-8ae5-4457c3cff6dd nodeName:}" failed. No retries permitted until 2026-01-31 16:45:05.471998919 +0000 UTC m=+892.278055835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices") pod "swift-ring-rebalance-6znnp" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd") : configmap "swift-ring-config-data" not found Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.972191 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26ea626b-8547-483d-8ae5-4457c3cff6dd-etc-swift\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.972815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-scripts\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.977196 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3656b8f0-e1d3-4214-9c23-dd437a57f2ad-etc-swift\") pod \"swift-storage-0\" (UID: \"3656b8f0-e1d3-4214-9c23-dd437a57f2ad\") " pod="openstack/swift-storage-0" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.977527 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-swiftconf\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.979955 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-combined-ca-bundle\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.979984 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-combined-ca-bundle\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.980180 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-dispersionconf\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.980503 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-dispersionconf\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.981087 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/62d8ac66-dbb1-4b02-844e-13123934241d-swiftconf\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") 
" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.993253 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48ldp\" (UniqueName: \"kubernetes.io/projected/26ea626b-8547-483d-8ae5-4457c3cff6dd-kube-api-access-48ldp\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:04 crc kubenswrapper[4730]: I0131 16:45:04.995034 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt46q\" (UniqueName: \"kubernetes.io/projected/62d8ac66-dbb1-4b02-844e-13123934241d-kube-api-access-qt46q\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.127371 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.127614 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.131858 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.180589 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.266567 4730 generic.go:334] "Generic (PLEG): container finished" podID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerID="2210f647d6cb9c4adcd8cc8f4de0212c32905733f5e201a15bbcdfb6cd82548b" exitCode=0 Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.266881 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2g75" event={"ID":"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9","Type":"ContainerDied","Data":"2210f647d6cb9c4adcd8cc8f4de0212c32905733f5e201a15bbcdfb6cd82548b"} Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.268145 4730 generic.go:334] "Generic (PLEG): container finished" podID="8209b289-3057-4a18-901a-5faa51042bc0" containerID="8956b15e2bb6c6d43c4b116c63b7c52bc49adc85a6ad9358b4065f82b99a8de5" exitCode=0 Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.268184 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" event={"ID":"8209b289-3057-4a18-901a-5faa51042bc0","Type":"ContainerDied","Data":"8956b15e2bb6c6d43c4b116c63b7c52bc49adc85a6ad9358b4065f82b99a8de5"} Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.269857 4730 generic.go:334] "Generic (PLEG): container finished" podID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerID="946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269" exitCode=0 Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.271029 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.271012 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerDied","Data":"946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269"} Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.290436 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.376310 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-combined-ca-bundle\") pod \"26ea626b-8547-483d-8ae5-4457c3cff6dd\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.376416 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-swiftconf\") pod \"26ea626b-8547-483d-8ae5-4457c3cff6dd\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.376440 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26ea626b-8547-483d-8ae5-4457c3cff6dd-etc-swift\") pod \"26ea626b-8547-483d-8ae5-4457c3cff6dd\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.376481 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-scripts\") pod \"26ea626b-8547-483d-8ae5-4457c3cff6dd\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.376542 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-dispersionconf\") pod \"26ea626b-8547-483d-8ae5-4457c3cff6dd\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.376579 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48ldp\" (UniqueName: \"kubernetes.io/projected/26ea626b-8547-483d-8ae5-4457c3cff6dd-kube-api-access-48ldp\") pod \"26ea626b-8547-483d-8ae5-4457c3cff6dd\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.377213 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ea626b-8547-483d-8ae5-4457c3cff6dd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "26ea626b-8547-483d-8ae5-4457c3cff6dd" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.378157 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-scripts" (OuterVolumeSpecName: "scripts") pod "26ea626b-8547-483d-8ae5-4457c3cff6dd" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.381609 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "26ea626b-8547-483d-8ae5-4457c3cff6dd" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.383253 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "26ea626b-8547-483d-8ae5-4457c3cff6dd" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.383487 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26ea626b-8547-483d-8ae5-4457c3cff6dd" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.392059 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ea626b-8547-483d-8ae5-4457c3cff6dd-kube-api-access-48ldp" (OuterVolumeSpecName: "kube-api-access-48ldp") pod "26ea626b-8547-483d-8ae5-4457c3cff6dd" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd"). InnerVolumeSpecName "kube-api-access-48ldp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478241 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices\") pod \"swift-ring-rebalance-6znnp\" (UID: \"26ea626b-8547-483d-8ae5-4457c3cff6dd\") " pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478365 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478411 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478423 4730 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478433 4730 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/26ea626b-8547-483d-8ae5-4457c3cff6dd-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478441 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478450 4730 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/26ea626b-8547-483d-8ae5-4457c3cff6dd-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.478459 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48ldp\" (UniqueName: \"kubernetes.io/projected/26ea626b-8547-483d-8ae5-4457c3cff6dd-kube-api-access-48ldp\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: E0131 16:45:05.478519 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:05 crc kubenswrapper[4730]: E0131 16:45:05.478568 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:45:06.478552806 +0000 UTC m=+893.284609722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:45:05 crc kubenswrapper[4730]: E0131 16:45:05.478597 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:05 crc kubenswrapper[4730]: E0131 16:45:05.478615 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices podName:26ea626b-8547-483d-8ae5-4457c3cff6dd nodeName:}" failed. No retries permitted until 2026-01-31 16:45:06.478609838 +0000 UTC m=+893.284666754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices") pod "swift-ring-rebalance-6znnp" (UID: "26ea626b-8547-483d-8ae5-4457c3cff6dd") : configmap "swift-ring-config-data" not found Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.532913 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66hvq" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" probeResult="failure" output=< Jan 31 16:45:05 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:45:05 crc kubenswrapper[4730]: > Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.688190 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.783080 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-catalog-content\") pod \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.783144 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-utilities\") pod \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.783297 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9h6\" (UniqueName: \"kubernetes.io/projected/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-kube-api-access-lm9h6\") pod \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\" (UID: \"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9\") " Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.783651 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-utilities" (OuterVolumeSpecName: "utilities") pod "c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" (UID: "c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.789181 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-kube-api-access-lm9h6" (OuterVolumeSpecName: "kube-api-access-lm9h6") pod "c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" (UID: "c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9"). InnerVolumeSpecName "kube-api-access-lm9h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: W0131 16:45:05.813371 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-5479979378d2c451723f2388695fdff1e27a7dd0c10d30b7f4b0d0dc2cffd112 WatchSource:0}: Error finding container 5479979378d2c451723f2388695fdff1e27a7dd0c10d30b7f4b0d0dc2cffd112: Status 404 returned error can't find the container with id 5479979378d2c451723f2388695fdff1e27a7dd0c10d30b7f4b0d0dc2cffd112 Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.813601 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.849398 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" (UID: "c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.885600 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9h6\" (UniqueName: \"kubernetes.io/projected/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-kube-api-access-lm9h6\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.885636 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.885645 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:05 crc kubenswrapper[4730]: I0131 16:45:05.973358 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.077038 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.278076 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerStarted","Data":"4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab"} Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.280479 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2g75" event={"ID":"c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9","Type":"ContainerDied","Data":"493a09d06ba5b8a509563b7cbafdc69915276ee02c5c5738e83ffa8c3e431673"} Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.280506 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r2g75" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.280540 4730 scope.go:117] "RemoveContainer" containerID="2210f647d6cb9c4adcd8cc8f4de0212c32905733f5e201a15bbcdfb6cd82548b" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.281546 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"5479979378d2c451723f2388695fdff1e27a7dd0c10d30b7f4b0d0dc2cffd112"} Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.282056 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6znnp" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.408947 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6znnp"] Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.415542 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6znnp"] Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.483898 4730 scope.go:117] "RemoveContainer" containerID="a6494b5417e344d46740e3882590bc379665a20495f2ff07307610a4c7f3354c" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.500756 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ea626b-8547-483d-8ae5-4457c3cff6dd" path="/var/lib/kubelet/pods/26ea626b-8547-483d-8ae5-4457c3cff6dd/volumes" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.501200 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2g75"] Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.501225 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r2g75"] Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.503236 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.503327 4730 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/26ea626b-8547-483d-8ae5-4457c3cff6dd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:06 crc kubenswrapper[4730]: E0131 16:45:06.503403 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:06 crc kubenswrapper[4730]: E0131 16:45:06.503449 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:45:08.503434444 +0000 UTC m=+895.309491360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.531392 4730 scope.go:117] "RemoveContainer" containerID="4b6fadd823c9d13b94793c06366ea4163d92b71fa49264142cb337d09feb29c5" Jan 31 16:45:06 crc kubenswrapper[4730]: E0131 16:45:06.612631 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26ea626b_8547_483d_8ae5_4457c3cff6dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8d1c9b3_f51e_4f3f_a2f4_e7d7a43f17a9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0d12ea3_22b7_4e96_9a8e_102e6473918c.slice/crio-4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0d12ea3_22b7_4e96_9a8e_102e6473918c.slice/crio-conmon-4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.777252 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.915517 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8209b289-3057-4a18-901a-5faa51042bc0-secret-volume\") pod \"8209b289-3057-4a18-901a-5faa51042bc0\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.915621 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8209b289-3057-4a18-901a-5faa51042bc0-config-volume\") pod \"8209b289-3057-4a18-901a-5faa51042bc0\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.915717 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm9pg\" (UniqueName: \"kubernetes.io/projected/8209b289-3057-4a18-901a-5faa51042bc0-kube-api-access-sm9pg\") pod \"8209b289-3057-4a18-901a-5faa51042bc0\" (UID: \"8209b289-3057-4a18-901a-5faa51042bc0\") " Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.917097 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8209b289-3057-4a18-901a-5faa51042bc0-config-volume" (OuterVolumeSpecName: "config-volume") pod "8209b289-3057-4a18-901a-5faa51042bc0" (UID: "8209b289-3057-4a18-901a-5faa51042bc0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.922722 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8209b289-3057-4a18-901a-5faa51042bc0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8209b289-3057-4a18-901a-5faa51042bc0" (UID: "8209b289-3057-4a18-901a-5faa51042bc0"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:06 crc kubenswrapper[4730]: I0131 16:45:06.931515 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8209b289-3057-4a18-901a-5faa51042bc0-kube-api-access-sm9pg" (OuterVolumeSpecName: "kube-api-access-sm9pg") pod "8209b289-3057-4a18-901a-5faa51042bc0" (UID: "8209b289-3057-4a18-901a-5faa51042bc0"). InnerVolumeSpecName "kube-api-access-sm9pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.017491 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm9pg\" (UniqueName: \"kubernetes.io/projected/8209b289-3057-4a18-901a-5faa51042bc0-kube-api-access-sm9pg\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.019611 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8209b289-3057-4a18-901a-5faa51042bc0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.019629 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8209b289-3057-4a18-901a-5faa51042bc0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.130963 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.296420 4730 generic.go:334] "Generic (PLEG): container finished" podID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerID="4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab" exitCode=0 Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.296484 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerDied","Data":"4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab"} Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.308260 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" event={"ID":"8209b289-3057-4a18-901a-5faa51042bc0","Type":"ContainerDied","Data":"e0c2d8a3cf164d771198c760cc065f3d7b16c1a9448de43a0538563391686ab1"} Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.308523 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c2d8a3cf164d771198c760cc065f3d7b16c1a9448de43a0538563391686ab1" Jan 31 16:45:07 crc kubenswrapper[4730]: I0131 16:45:07.308317 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497965-497zd" Jan 31 16:45:08 crc kubenswrapper[4730]: I0131 16:45:08.474698 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" path="/var/lib/kubelet/pods/c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9/volumes" Jan 31 16:45:08 crc kubenswrapper[4730]: I0131 16:45:08.550218 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:08 crc kubenswrapper[4730]: E0131 16:45:08.550319 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:08 crc kubenswrapper[4730]: E0131 16:45:08.550371 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:45:12.550355448 +0000 UTC m=+899.356412364 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:45:09 crc kubenswrapper[4730]: I0131 16:45:09.326749 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"c9f4ee519a0ca08568068e912f2c9da4115129c89e9df574ff6ff7f3e8045c1d"} Jan 31 16:45:09 crc kubenswrapper[4730]: I0131 16:45:09.327105 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"b7e55442d57d541282e5b289b91b104d6e090f33bdb8da9b66812e78b739f018"} Jan 31 16:45:09 crc kubenswrapper[4730]: I0131 16:45:09.329917 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerStarted","Data":"7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722"} Jan 31 16:45:09 crc kubenswrapper[4730]: I0131 16:45:09.349930 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x6zhm" podStartSLOduration=4.8556136930000005 podStartE2EDuration="8.349916492s" podCreationTimestamp="2026-01-31 16:45:01 +0000 UTC" firstStartedPulling="2026-01-31 16:45:05.279871005 +0000 UTC m=+892.085927921" lastFinishedPulling="2026-01-31 16:45:08.774173804 +0000 UTC m=+895.580230720" observedRunningTime="2026-01-31 16:45:09.344903538 +0000 UTC m=+896.150960454" watchObservedRunningTime="2026-01-31 16:45:09.349916492 +0000 UTC m=+896.155973408" Jan 31 16:45:09 crc kubenswrapper[4730]: I0131 16:45:09.830991 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:45:09 crc kubenswrapper[4730]: I0131 16:45:09.883226 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9tzhb"] Jan 31 16:45:09 crc kubenswrapper[4730]: I0131 
16:45:09.883491 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" podUID="738cd861-a897-43d9-b336-cbb6afca4e96" containerName="dnsmasq-dns" containerID="cri-o://ee6915991fcfc34555b471a5f43111111042fabaf775a3f96c89500dc2f0c7a2" gracePeriod=10 Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.133523 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.371496 4730 generic.go:334] "Generic (PLEG): container finished" podID="738cd861-a897-43d9-b336-cbb6afca4e96" containerID="ee6915991fcfc34555b471a5f43111111042fabaf775a3f96c89500dc2f0c7a2" exitCode=0 Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.371554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" event={"ID":"738cd861-a897-43d9-b336-cbb6afca4e96","Type":"ContainerDied","Data":"ee6915991fcfc34555b471a5f43111111042fabaf775a3f96c89500dc2f0c7a2"} Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.379932 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"1e1a95ec4645f70e29bd4924ab70475d6e1b67897e5f5d142236988937d3a951"} Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.379984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"ee2ff88bb853729e06d0e49859e8c8d41e38412b0631251e47352875b9ccd940"} Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.588014 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.594263 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.694318 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-nb\") pod \"738cd861-a897-43d9-b336-cbb6afca4e96\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.694378 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8796q\" (UniqueName: \"kubernetes.io/projected/738cd861-a897-43d9-b336-cbb6afca4e96-kube-api-access-8796q\") pod \"738cd861-a897-43d9-b336-cbb6afca4e96\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.694479 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-config\") pod \"738cd861-a897-43d9-b336-cbb6afca4e96\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.694608 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-sb\") pod \"738cd861-a897-43d9-b336-cbb6afca4e96\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.694652 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-dns-svc\") pod \"738cd861-a897-43d9-b336-cbb6afca4e96\" (UID: \"738cd861-a897-43d9-b336-cbb6afca4e96\") " Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.724492 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738cd861-a897-43d9-b336-cbb6afca4e96-kube-api-access-8796q" (OuterVolumeSpecName: "kube-api-access-8796q") pod "738cd861-a897-43d9-b336-cbb6afca4e96" (UID: "738cd861-a897-43d9-b336-cbb6afca4e96"). InnerVolumeSpecName "kube-api-access-8796q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.787559 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-config" (OuterVolumeSpecName: "config") pod "738cd861-a897-43d9-b336-cbb6afca4e96" (UID: "738cd861-a897-43d9-b336-cbb6afca4e96"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.796702 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8796q\" (UniqueName: \"kubernetes.io/projected/738cd861-a897-43d9-b336-cbb6afca4e96-kube-api-access-8796q\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.796730 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.812314 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "738cd861-a897-43d9-b336-cbb6afca4e96" (UID: "738cd861-a897-43d9-b336-cbb6afca4e96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.827270 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "738cd861-a897-43d9-b336-cbb6afca4e96" (UID: "738cd861-a897-43d9-b336-cbb6afca4e96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.836081 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "738cd861-a897-43d9-b336-cbb6afca4e96" (UID: "738cd861-a897-43d9-b336-cbb6afca4e96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.898289 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.898322 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:10 crc kubenswrapper[4730]: I0131 16:45:10.898333 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738cd861-a897-43d9-b336-cbb6afca4e96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.380530 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" event={"ID":"738cd861-a897-43d9-b336-cbb6afca4e96","Type":"ContainerDied","Data":"70810ef69032ac57d86b056f23d60ede4650d89c2c133005a14a56b41cf4c2ac"} Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.380590 4730 scope.go:117] "RemoveContainer" containerID="ee6915991fcfc34555b471a5f43111111042fabaf775a3f96c89500dc2f0c7a2" Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.380544 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9tzhb" Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.382960 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="c9f4ee519a0ca08568068e912f2c9da4115129c89e9df574ff6ff7f3e8045c1d" exitCode=1 Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.382986 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"c9f4ee519a0ca08568068e912f2c9da4115129c89e9df574ff6ff7f3e8045c1d"} Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.399995 4730 scope.go:117] "RemoveContainer" containerID="94609f806e4fddcc74454716985348ee449def22a0a71eb0a4fa170afbca4e00" Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.418121 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9tzhb"] Jan 31 16:45:11 crc kubenswrapper[4730]: I0131 16:45:11.420254 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9tzhb"] Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.269078 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.269382 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.375891 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.404587 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="9c15d63ad8f42443e6fb812f50cad6005da98449bf12408c6e6fcc99e744c4a3" exitCode=1 Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.404642 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"ee85bc5fc59c3f0b6790a01a8bec9adde51e9224843a4dc959082405198dc125"} Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.404665 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"544775611dc14e71e1508d7ea6b185cdbdf851ced80cc3b9491a74d99c1d3706"} Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.404678 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"9c15d63ad8f42443e6fb812f50cad6005da98449bf12408c6e6fcc99e744c4a3"} Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.404690 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"53f6b64e22104c46965560952b2d634299f172dc2f049e9e356126ea1927816a"} Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.473576 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738cd861-a897-43d9-b336-cbb6afca4e96" path="/var/lib/kubelet/pods/738cd861-a897-43d9-b336-cbb6afca4e96/volumes" Jan 31 16:45:12 crc kubenswrapper[4730]: I0131 16:45:12.626963 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:12 crc kubenswrapper[4730]: E0131 16:45:12.627117 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:12 crc kubenswrapper[4730]: E0131 16:45:12.627172 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:45:20.627154823 +0000 UTC m=+907.433211749 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:45:13 crc kubenswrapper[4730]: I0131 16:45:13.421572 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"9a708d3aff0a8448dc05943759676f51dcd2833ef9e94e1bda3561cb5e7b5a0e"} Jan 31 16:45:13 crc kubenswrapper[4730]: I0131 16:45:13.421929 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"301b793298393aea071821cc7438a9456c90a908caa083ab32a803d23cd647db"} Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.431739 4730 generic.go:334] "Generic (PLEG): container finished" podID="3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda" containerID="4d382f6bf7143cdd0df6cad985283e26d4208dccdf17de8175042fb502777962" exitCode=0 Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.431855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda","Type":"ContainerDied","Data":"4d382f6bf7143cdd0df6cad985283e26d4208dccdf17de8175042fb502777962"} Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.452973 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"435927c74b967706fe7ebdbf1eac2e63fbd02dfb571e581ab2e5e21f1b4671f8"} Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.453035 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"bb0512c85c4a3d196cba92fc641968bb022c2778b2effc7448c5bb82ca93f229"} Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518416 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ctbfr"] Jan 31 16:45:14 crc kubenswrapper[4730]: E0131 16:45:14.518676 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8209b289-3057-4a18-901a-5faa51042bc0" containerName="collect-profiles" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518688 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8209b289-3057-4a18-901a-5faa51042bc0" containerName="collect-profiles" Jan 31 16:45:14 crc kubenswrapper[4730]: E0131 16:45:14.518700 4730 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="extract-utilities" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518706 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="extract-utilities" Jan 31 16:45:14 crc kubenswrapper[4730]: E0131 16:45:14.518713 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738cd861-a897-43d9-b336-cbb6afca4e96" containerName="init" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518719 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="738cd861-a897-43d9-b336-cbb6afca4e96" containerName="init" Jan 31 16:45:14 crc kubenswrapper[4730]: E0131 16:45:14.518736 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="registry-server" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518742 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="registry-server" Jan 31 16:45:14 crc kubenswrapper[4730]: E0131 16:45:14.518758 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738cd861-a897-43d9-b336-cbb6afca4e96" containerName="dnsmasq-dns" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518764 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="738cd861-a897-43d9-b336-cbb6afca4e96" containerName="dnsmasq-dns" Jan 31 16:45:14 crc kubenswrapper[4730]: E0131 16:45:14.518781 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="extract-content" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518787 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="extract-content" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518948 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d1c9b3-f51e-4f3f-a2f4-e7d7a43f17a9" containerName="registry-server" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518963 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8209b289-3057-4a18-901a-5faa51042bc0" containerName="collect-profiles" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.518976 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="738cd861-a897-43d9-b336-cbb6afca4e96" containerName="dnsmasq-dns" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.519589 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ctbfr"] Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.519659 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.521326 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.649448 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-operator-scripts\") pod \"root-account-create-update-ctbfr\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.649937 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkfxs\" (UniqueName: \"kubernetes.io/projected/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-kube-api-access-mkfxs\") pod \"root-account-create-update-ctbfr\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.751653 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkfxs\" (UniqueName: \"kubernetes.io/projected/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-kube-api-access-mkfxs\") pod \"root-account-create-update-ctbfr\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.751707 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-operator-scripts\") pod \"root-account-create-update-ctbfr\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.752568 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-operator-scripts\") pod \"root-account-create-update-ctbfr\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.772074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkfxs\" (UniqueName: \"kubernetes.io/projected/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-kube-api-access-mkfxs\") pod \"root-account-create-update-ctbfr\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:14 crc kubenswrapper[4730]: I0131 16:45:14.941153 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.193463 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.206143 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ctbfr"] Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.238245 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnc5j"] Jan 31 16:45:15 crc kubenswrapper[4730]: W0131 16:45:15.245302 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96c3ae0_9cf1_40bf_9ba2_89066c04c975.slice/crio-7d51a9681830051359ecad90c6a40a1e6b37a71982e3cda18b4453c103e7be6e WatchSource:0}: Error finding container 7d51a9681830051359ecad90c6a40a1e6b37a71982e3cda18b4453c103e7be6e: Status 404 returned error can't find the container with id 7d51a9681830051359ecad90c6a40a1e6b37a71982e3cda18b4453c103e7be6e Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.259256 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.465197 4730 generic.go:334] "Generic (PLEG): container finished" podID="696f3c30-383d-4a98-ab73-bd90571c8fac" containerID="a6cc9a21447d28c0904a7fed6d8eda4afbac81c2529b1fa12165f1a5533ad371" exitCode=0 Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.465270 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696f3c30-383d-4a98-ab73-bd90571c8fac","Type":"ContainerDied","Data":"a6cc9a21447d28c0904a7fed6d8eda4afbac81c2529b1fa12165f1a5533ad371"} Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.468339 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda","Type":"ContainerStarted","Data":"561e69eca9290dbf48200f5a2d60286ff0c6ffbc1025230e6522c7054bb026c4"} Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.468750 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.469871 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wnc5j" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="registry-server" containerID="cri-o://7fe502529bbad1a3216beae3c6a7646a562e27dcdff23d620d84ff4c8bc9e12c" gracePeriod=2 Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.470108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctbfr" event={"ID":"d96c3ae0-9cf1-40bf-9ba2-89066c04c975","Type":"ContainerStarted","Data":"7d51a9681830051359ecad90c6a40a1e6b37a71982e3cda18b4453c103e7be6e"} Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.532753 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.148202021 podStartE2EDuration="53.532739681s" podCreationTimestamp="2026-01-31 16:44:22 +0000 UTC" firstStartedPulling="2026-01-31 16:44:29.739684656 +0000 UTC m=+856.545741572" lastFinishedPulling="2026-01-31 16:44:41.124222316 +0000 UTC m=+867.930279232" observedRunningTime="2026-01-31 16:45:15.528225469 +0000 
UTC m=+902.334282385" watchObservedRunningTime="2026-01-31 16:45:15.532739681 +0000 UTC m=+902.338796587" Jan 31 16:45:15 crc kubenswrapper[4730]: I0131 16:45:15.600064 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66hvq" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" probeResult="failure" output=< Jan 31 16:45:15 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:45:15 crc kubenswrapper[4730]: > Jan 31 16:45:16 crc kubenswrapper[4730]: I0131 16:45:16.481982 4730 generic.go:334] "Generic (PLEG): container finished" podID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerID="7fe502529bbad1a3216beae3c6a7646a562e27dcdff23d620d84ff4c8bc9e12c" exitCode=0 Jan 31 16:45:16 crc kubenswrapper[4730]: I0131 16:45:16.482111 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnc5j" event={"ID":"8b6676c8-c57e-4081-b77c-47e5a534abb0","Type":"ContainerDied","Data":"7fe502529bbad1a3216beae3c6a7646a562e27dcdff23d620d84ff4c8bc9e12c"} Jan 31 16:45:16 crc kubenswrapper[4730]: I0131 16:45:16.920883 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-b7vmm"] Jan 31 16:45:16 crc kubenswrapper[4730]: I0131 16:45:16.930564 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:16 crc kubenswrapper[4730]: I0131 16:45:16.951538 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b7vmm"] Jan 31 16:45:16 crc kubenswrapper[4730]: I0131 16:45:16.990379 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dccn6\" (UniqueName: \"kubernetes.io/projected/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-kube-api-access-dccn6\") pod \"keystone-db-create-b7vmm\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:16 crc kubenswrapper[4730]: I0131 16:45:16.990467 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-operator-scripts\") pod \"keystone-db-create-b7vmm\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.019743 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a81a-account-create-update-482zr"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.021048 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.023050 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.045895 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.052945 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a81a-account-create-update-482zr"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.091396 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-utilities\") pod \"8b6676c8-c57e-4081-b77c-47e5a534abb0\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.091451 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-catalog-content\") pod \"8b6676c8-c57e-4081-b77c-47e5a534abb0\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.091594 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz7kk\" (UniqueName: \"kubernetes.io/projected/8b6676c8-c57e-4081-b77c-47e5a534abb0-kube-api-access-vz7kk\") pod \"8b6676c8-c57e-4081-b77c-47e5a534abb0\" (UID: \"8b6676c8-c57e-4081-b77c-47e5a534abb0\") " Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.091769 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c2561b-5ed5-4508-b5a9-b4179c91ac72-operator-scripts\") pod \"keystone-a81a-account-create-update-482zr\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.091843 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dccn6\" (UniqueName: \"kubernetes.io/projected/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-kube-api-access-dccn6\") pod \"keystone-db-create-b7vmm\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.091900 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvqh\" (UniqueName: \"kubernetes.io/projected/45c2561b-5ed5-4508-b5a9-b4179c91ac72-kube-api-access-sfvqh\") pod \"keystone-a81a-account-create-update-482zr\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.091933 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-operator-scripts\") pod \"keystone-db-create-b7vmm\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.092087 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-utilities" (OuterVolumeSpecName: "utilities") pod "8b6676c8-c57e-4081-b77c-47e5a534abb0" (UID: "8b6676c8-c57e-4081-b77c-47e5a534abb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.092687 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-operator-scripts\") pod \"keystone-db-create-b7vmm\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.098866 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6676c8-c57e-4081-b77c-47e5a534abb0-kube-api-access-vz7kk" (OuterVolumeSpecName: "kube-api-access-vz7kk") pod "8b6676c8-c57e-4081-b77c-47e5a534abb0" (UID: "8b6676c8-c57e-4081-b77c-47e5a534abb0"). InnerVolumeSpecName "kube-api-access-vz7kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.119755 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dccn6\" (UniqueName: \"kubernetes.io/projected/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-kube-api-access-dccn6\") pod \"keystone-db-create-b7vmm\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.140703 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b6676c8-c57e-4081-b77c-47e5a534abb0" (UID: "8b6676c8-c57e-4081-b77c-47e5a534abb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.193464 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvqh\" (UniqueName: \"kubernetes.io/projected/45c2561b-5ed5-4508-b5a9-b4179c91ac72-kube-api-access-sfvqh\") pod \"keystone-a81a-account-create-update-482zr\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.193564 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c2561b-5ed5-4508-b5a9-b4179c91ac72-operator-scripts\") pod \"keystone-a81a-account-create-update-482zr\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.193604 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.193614 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b6676c8-c57e-4081-b77c-47e5a534abb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.193624 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz7kk\" (UniqueName: \"kubernetes.io/projected/8b6676c8-c57e-4081-b77c-47e5a534abb0-kube-api-access-vz7kk\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.194184 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/45c2561b-5ed5-4508-b5a9-b4179c91ac72-operator-scripts\") pod \"keystone-a81a-account-create-update-482zr\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.214479 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvqh\" (UniqueName: \"kubernetes.io/projected/45c2561b-5ed5-4508-b5a9-b4179c91ac72-kube-api-access-sfvqh\") pod \"keystone-a81a-account-create-update-482zr\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.270216 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.276989 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.286479 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v5qvf"] Jan 31 16:45:17 crc kubenswrapper[4730]: E0131 16:45:17.286883 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="extract-content" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.286967 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="extract-content" Jan 31 16:45:17 crc kubenswrapper[4730]: E0131 16:45:17.287042 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="registry-server" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.287095 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="registry-server" Jan 31 16:45:17 crc kubenswrapper[4730]: E0131 16:45:17.287163 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="extract-utilities" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.287218 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="extract-utilities" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.287426 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" containerName="registry-server" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.287978 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.317866 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v5qvf"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.344429 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.396831 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31af8919-7a56-4384-9ee9-edf256738e2d-operator-scripts\") pod \"placement-db-create-v5qvf\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.396896 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptpqm\" (UniqueName: \"kubernetes.io/projected/31af8919-7a56-4384-9ee9-edf256738e2d-kube-api-access-ptpqm\") pod \"placement-db-create-v5qvf\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.440592 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d7a2-account-create-update-9gqnx"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.441517 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.445239 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.463919 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d7a2-account-create-update-9gqnx"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.498331 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d28b8af-a349-44fa-8e46-ec5c26389dff-operator-scripts\") pod \"placement-d7a2-account-create-update-9gqnx\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.498383 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8tp\" (UniqueName: \"kubernetes.io/projected/8d28b8af-a349-44fa-8e46-ec5c26389dff-kube-api-access-jl8tp\") pod \"placement-d7a2-account-create-update-9gqnx\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.498524 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31af8919-7a56-4384-9ee9-edf256738e2d-operator-scripts\") pod \"placement-db-create-v5qvf\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.498580 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptpqm\" (UniqueName: \"kubernetes.io/projected/31af8919-7a56-4384-9ee9-edf256738e2d-kube-api-access-ptpqm\") pod \"placement-db-create-v5qvf\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.500649 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31af8919-7a56-4384-9ee9-edf256738e2d-operator-scripts\") pod 
\"placement-db-create-v5qvf\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.507894 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"696f3c30-383d-4a98-ab73-bd90571c8fac","Type":"ContainerStarted","Data":"300f6125802cf6f191391ef08ec3510f5db1caa84e0e9d3d3c74e880d3a3ce1b"} Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.513937 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.523453 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptpqm\" (UniqueName: \"kubernetes.io/projected/31af8919-7a56-4384-9ee9-edf256738e2d-kube-api-access-ptpqm\") pod \"placement-db-create-v5qvf\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.562789 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"23311eb774dc79a6ac21139a2ae7fe9049108d86958c6515fce67cb0751bbd89"} Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.562840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"f86a624cad5544fabf3d57d44531b7f2dc3ab563fd259866309dc22a7f70061f"} Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.577887 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctbfr" event={"ID":"d96c3ae0-9cf1-40bf-9ba2-89066c04c975","Type":"ContainerStarted","Data":"3abc831078ca0909ba2a0cc107f5b02749686c97c3a76725bf9d5dd930b49582"} Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.584699 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnc5j" event={"ID":"8b6676c8-c57e-4081-b77c-47e5a534abb0","Type":"ContainerDied","Data":"37cc6df0683a5eafe4b5274055be34721cbe0cb0bcb23423eabfa64498a63770"} Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.585052 4730 scope.go:117] "RemoveContainer" containerID="7fe502529bbad1a3216beae3c6a7646a562e27dcdff23d620d84ff4c8bc9e12c" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.585197 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wnc5j" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.602401 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d28b8af-a349-44fa-8e46-ec5c26389dff-operator-scripts\") pod \"placement-d7a2-account-create-update-9gqnx\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.602456 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl8tp\" (UniqueName: \"kubernetes.io/projected/8d28b8af-a349-44fa-8e46-ec5c26389dff-kube-api-access-jl8tp\") pod \"placement-d7a2-account-create-update-9gqnx\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.604345 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d28b8af-a349-44fa-8e46-ec5c26389dff-operator-scripts\") pod \"placement-d7a2-account-create-update-9gqnx\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.608914 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.612384 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ctbfr" podStartSLOduration=3.612326518 podStartE2EDuration="3.612326518s" podCreationTimestamp="2026-01-31 16:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:17.599325464 +0000 UTC m=+904.405382380" watchObservedRunningTime="2026-01-31 16:45:17.612326518 +0000 UTC m=+904.418383444" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.622683 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.622667235 podStartE2EDuration="54.622667235s" podCreationTimestamp="2026-01-31 16:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:17.567217426 +0000 UTC m=+904.373274502" watchObservedRunningTime="2026-01-31 16:45:17.622667235 +0000 UTC m=+904.428724151" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.630133 4730 scope.go:117] "RemoveContainer" containerID="8f8776cb29cf894555e4f1ac088162722f24cf8a875d547a5720e3f33d9e62e0" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.656675 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl8tp\" (UniqueName: \"kubernetes.io/projected/8d28b8af-a349-44fa-8e46-ec5c26389dff-kube-api-access-jl8tp\") pod \"placement-d7a2-account-create-update-9gqnx\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.681367 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnc5j"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.705449 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-wnc5j"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.721031 4730 scope.go:117] "RemoveContainer" containerID="9d57277fd097cd390c20c1c672607a13550085040ca00b1a964f9434cefce34b" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.762168 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.840322 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-sjqqh"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.841614 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.851238 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b7vmm"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.904093 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sjqqh"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.922351 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9v7p\" (UniqueName: \"kubernetes.io/projected/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-kube-api-access-q9v7p\") pod \"glance-db-create-sjqqh\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.922427 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-operator-scripts\") pod \"glance-db-create-sjqqh\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:17 crc kubenswrapper[4730]: W0131 16:45:17.934834 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5dc6e44_d1e4_4d5e_a83e_f2223e70f013.slice/crio-3aa2fcd41ce8082c717e11372e41ac0869fba15a988aed80d6acdd225a32ceb9 WatchSource:0}: Error finding container 3aa2fcd41ce8082c717e11372e41ac0869fba15a988aed80d6acdd225a32ceb9: Status 404 returned error can't find the container with id 3aa2fcd41ce8082c717e11372e41ac0869fba15a988aed80d6acdd225a32ceb9 Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.980619 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d8b-account-create-update-5d7p8"] Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.981495 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.985527 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 16:45:17 crc kubenswrapper[4730]: I0131 16:45:17.996908 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d8b-account-create-update-5d7p8"] Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.024290 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-operator-scripts\") pod \"glance-db-create-sjqqh\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.024646 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjj4\" (UniqueName: \"kubernetes.io/projected/2964865c-12e5-4d18-bd62-16629f4a1090-kube-api-access-8vjj4\") pod \"glance-1d8b-account-create-update-5d7p8\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.024709 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2964865c-12e5-4d18-bd62-16629f4a1090-operator-scripts\") pod \"glance-1d8b-account-create-update-5d7p8\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.024757 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9v7p\" (UniqueName: \"kubernetes.io/projected/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-kube-api-access-q9v7p\") pod \"glance-db-create-sjqqh\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.025502 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-operator-scripts\") pod \"glance-db-create-sjqqh\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.083189 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9v7p\" (UniqueName: \"kubernetes.io/projected/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-kube-api-access-q9v7p\") pod \"glance-db-create-sjqqh\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.125696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vjj4\" (UniqueName: \"kubernetes.io/projected/2964865c-12e5-4d18-bd62-16629f4a1090-kube-api-access-8vjj4\") pod \"glance-1d8b-account-create-update-5d7p8\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.127676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2964865c-12e5-4d18-bd62-16629f4a1090-operator-scripts\") pod 
\"glance-1d8b-account-create-update-5d7p8\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.128396 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2964865c-12e5-4d18-bd62-16629f4a1090-operator-scripts\") pod \"glance-1d8b-account-create-update-5d7p8\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.153235 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vjj4\" (UniqueName: \"kubernetes.io/projected/2964865c-12e5-4d18-bd62-16629f4a1090-kube-api-access-8vjj4\") pod \"glance-1d8b-account-create-update-5d7p8\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.161607 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a81a-account-create-update-482zr"] Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.220143 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.318659 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.479462 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6676c8-c57e-4081-b77c-47e5a534abb0" path="/var/lib/kubelet/pods/8b6676c8-c57e-4081-b77c-47e5a534abb0/volumes" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.566546 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d7a2-account-create-update-9gqnx"] Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.629204 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="f86a624cad5544fabf3d57d44531b7f2dc3ab563fd259866309dc22a7f70061f" exitCode=1 Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.629274 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"f86a624cad5544fabf3d57d44531b7f2dc3ab563fd259866309dc22a7f70061f"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.629302 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"4afc97cc35d2e731a518f6665447d634551954ecb52faa7133808348becf7ec4"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.629734 4730 scope.go:117] "RemoveContainer" containerID="c9f4ee519a0ca08568068e912f2c9da4115129c89e9df574ff6ff7f3e8045c1d" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.629819 4730 scope.go:117] "RemoveContainer" containerID="9c15d63ad8f42443e6fb812f50cad6005da98449bf12408c6e6fcc99e744c4a3" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.629914 4730 scope.go:117] "RemoveContainer" containerID="f86a624cad5544fabf3d57d44531b7f2dc3ab563fd259866309dc22a7f70061f" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.632483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b7vmm" 
event={"ID":"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013","Type":"ContainerStarted","Data":"1b1f974a4da052be1b62137faad994d34d2bd00606ed02f18aeb3589a9d62b78"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.632512 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b7vmm" event={"ID":"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013","Type":"ContainerStarted","Data":"3aa2fcd41ce8082c717e11372e41ac0869fba15a988aed80d6acdd225a32ceb9"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.637469 4730 generic.go:334] "Generic (PLEG): container finished" podID="d96c3ae0-9cf1-40bf-9ba2-89066c04c975" containerID="3abc831078ca0909ba2a0cc107f5b02749686c97c3a76725bf9d5dd930b49582" exitCode=0 Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.637548 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctbfr" event={"ID":"d96c3ae0-9cf1-40bf-9ba2-89066c04c975","Type":"ContainerDied","Data":"3abc831078ca0909ba2a0cc107f5b02749686c97c3a76725bf9d5dd930b49582"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.652610 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d7a2-account-create-update-9gqnx" event={"ID":"8d28b8af-a349-44fa-8e46-ec5c26389dff","Type":"ContainerStarted","Data":"8fcfa637b2bdd1239c390d373590202c10bd58bc05d341333f159c9de83f948c"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.698389 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a81a-account-create-update-482zr" event={"ID":"45c2561b-5ed5-4508-b5a9-b4179c91ac72","Type":"ContainerStarted","Data":"9773ec7d9f8a0b05b588227024fdecbef35b171eef74fcce48fb674c87c0e0b8"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.698428 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a81a-account-create-update-482zr" event={"ID":"45c2561b-5ed5-4508-b5a9-b4179c91ac72","Type":"ContainerStarted","Data":"c610265c4bf19eff0889b64149eb24d7f157a37b42005c1069056b6aabd1adde"} Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.724133 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v5qvf"] Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.782163 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-b7vmm" podStartSLOduration=2.78214456 podStartE2EDuration="2.78214456s" podCreationTimestamp="2026-01-31 16:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:18.73026593 +0000 UTC m=+905.536322846" watchObservedRunningTime="2026-01-31 16:45:18.78214456 +0000 UTC m=+905.588201476" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.819220 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-a81a-account-create-update-482zr" podStartSLOduration=2.819201011 podStartE2EDuration="2.819201011s" podCreationTimestamp="2026-01-31 16:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:18.810874804 +0000 UTC m=+905.616931720" watchObservedRunningTime="2026-01-31 16:45:18.819201011 +0000 UTC m=+905.625257927" Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.842193 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sjqqh"] Jan 31 16:45:18 crc kubenswrapper[4730]: I0131 16:45:18.910257 4730 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d8b-account-create-update-5d7p8"] Jan 31 16:45:18 crc kubenswrapper[4730]: W0131 16:45:18.929322 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2964865c_12e5_4d18_bd62_16629f4a1090.slice/crio-bc703d94157951a719bcd6dd419cbb14452d753f8764fa2a854583010a7b09fb WatchSource:0}: Error finding container bc703d94157951a719bcd6dd419cbb14452d753f8764fa2a854583010a7b09fb: Status 404 returned error can't find the container with id bc703d94157951a719bcd6dd419cbb14452d753f8764fa2a854583010a7b09fb Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.726974 4730 generic.go:334] "Generic (PLEG): container finished" podID="b5dc6e44-d1e4-4d5e-a83e-f2223e70f013" containerID="1b1f974a4da052be1b62137faad994d34d2bd00606ed02f18aeb3589a9d62b78" exitCode=0 Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.727044 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b7vmm" event={"ID":"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013","Type":"ContainerDied","Data":"1b1f974a4da052be1b62137faad994d34d2bd00606ed02f18aeb3589a9d62b78"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.728652 4730 generic.go:334] "Generic (PLEG): container finished" podID="2964865c-12e5-4d18-bd62-16629f4a1090" containerID="68b0d561dbc914741e6f1e7c54792963052193dfea00eb9eb40b4b446131d9b1" exitCode=0 Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.728713 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d8b-account-create-update-5d7p8" event={"ID":"2964865c-12e5-4d18-bd62-16629f4a1090","Type":"ContainerDied","Data":"68b0d561dbc914741e6f1e7c54792963052193dfea00eb9eb40b4b446131d9b1"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.728730 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d8b-account-create-update-5d7p8" event={"ID":"2964865c-12e5-4d18-bd62-16629f4a1090","Type":"ContainerStarted","Data":"bc703d94157951a719bcd6dd419cbb14452d753f8764fa2a854583010a7b09fb"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.730019 4730 generic.go:334] "Generic (PLEG): container finished" podID="8d28b8af-a349-44fa-8e46-ec5c26389dff" containerID="4bf47bf5d412ac417c8e5e5795018bddf82c37a4882326f8403dcd690825a72b" exitCode=0 Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.730078 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d7a2-account-create-update-9gqnx" event={"ID":"8d28b8af-a349-44fa-8e46-ec5c26389dff","Type":"ContainerDied","Data":"4bf47bf5d412ac417c8e5e5795018bddf82c37a4882326f8403dcd690825a72b"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.731229 4730 generic.go:334] "Generic (PLEG): container finished" podID="45c2561b-5ed5-4508-b5a9-b4179c91ac72" containerID="9773ec7d9f8a0b05b588227024fdecbef35b171eef74fcce48fb674c87c0e0b8" exitCode=0 Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.731282 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a81a-account-create-update-482zr" event={"ID":"45c2561b-5ed5-4508-b5a9-b4179c91ac72","Type":"ContainerDied","Data":"9773ec7d9f8a0b05b588227024fdecbef35b171eef74fcce48fb674c87c0e0b8"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.733034 4730 generic.go:334] "Generic (PLEG): container finished" podID="31af8919-7a56-4384-9ee9-edf256738e2d" containerID="21c4985406e9b3864e245ea03fb6ba6e3887ad59b19bf9cb146fdb5156ad45eb" exitCode=0 Jan 31 16:45:19 
crc kubenswrapper[4730]: I0131 16:45:19.733125 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v5qvf" event={"ID":"31af8919-7a56-4384-9ee9-edf256738e2d","Type":"ContainerDied","Data":"21c4985406e9b3864e245ea03fb6ba6e3887ad59b19bf9cb146fdb5156ad45eb"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.733144 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v5qvf" event={"ID":"31af8919-7a56-4384-9ee9-edf256738e2d","Type":"ContainerStarted","Data":"45a3ba8656c3a9c7784b9f275ab80c76c880c0fc63e10a6c6f3a4c3a998cac60"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.737545 4730 generic.go:334] "Generic (PLEG): container finished" podID="b5c1ddc8-93ef-4228-aa5b-05989e77b3ac" containerID="130fc790319cb61672dbcf7fc52cf14bcfced6c2addeafce6e91ee87e759514c" exitCode=0 Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.737604 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sjqqh" event={"ID":"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac","Type":"ContainerDied","Data":"130fc790319cb61672dbcf7fc52cf14bcfced6c2addeafce6e91ee87e759514c"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.737625 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sjqqh" event={"ID":"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac","Type":"ContainerStarted","Data":"66e7365415a78da58f560ad942f29fc59a55d003f355fe3d7bd369b6d6d51f49"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.748200 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="bc5b31d8e552e7d705f3847a601eb6a6cdd43104cb139f5fefea06f83f7019fb" exitCode=1 Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.748276 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"b05d384f284938e62a50508baa50781abb2b371b6922f2e11d344f430b2b032d"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.748313 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"b79ccc8f9f8687f81b72396372015d0c3b088360a39f057a123836960c51f360"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.748323 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"bc5b31d8e552e7d705f3847a601eb6a6cdd43104cb139f5fefea06f83f7019fb"} Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.748347 4730 scope.go:117] "RemoveContainer" containerID="c9f4ee519a0ca08568068e912f2c9da4115129c89e9df574ff6ff7f3e8045c1d" Jan 31 16:45:19 crc kubenswrapper[4730]: I0131 16:45:19.749059 4730 scope.go:117] "RemoveContainer" containerID="bc5b31d8e552e7d705f3847a601eb6a6cdd43104cb139f5fefea06f83f7019fb" Jan 31 16:45:19 crc kubenswrapper[4730]: E0131 16:45:19.749424 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.419849 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.486104 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-operator-scripts\") pod \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.486269 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkfxs\" (UniqueName: \"kubernetes.io/projected/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-kube-api-access-mkfxs\") pod \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\" (UID: \"d96c3ae0-9cf1-40bf-9ba2-89066c04c975\") " Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.487576 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d96c3ae0-9cf1-40bf-9ba2-89066c04c975" (UID: "d96c3ae0-9cf1-40bf-9ba2-89066c04c975"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.491055 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-kube-api-access-mkfxs" (OuterVolumeSpecName: "kube-api-access-mkfxs") pod "d96c3ae0-9cf1-40bf-9ba2-89066c04c975" (UID: "d96c3ae0-9cf1-40bf-9ba2-89066c04c975"). InnerVolumeSpecName "kube-api-access-mkfxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.588378 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkfxs\" (UniqueName: \"kubernetes.io/projected/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-kube-api-access-mkfxs\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.588419 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96c3ae0-9cf1-40bf-9ba2-89066c04c975-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.689531 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:20 crc kubenswrapper[4730]: E0131 16:45:20.689703 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:20 crc kubenswrapper[4730]: E0131 16:45:20.689753 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:45:36.68973888 +0000 UTC m=+923.495795796 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.761904 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="b05d384f284938e62a50508baa50781abb2b371b6922f2e11d344f430b2b032d" exitCode=1 Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.761946 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="b79ccc8f9f8687f81b72396372015d0c3b088360a39f057a123836960c51f360" exitCode=1 Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.761996 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"b05d384f284938e62a50508baa50781abb2b371b6922f2e11d344f430b2b032d"} Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.762030 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"b79ccc8f9f8687f81b72396372015d0c3b088360a39f057a123836960c51f360"} Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.762053 4730 scope.go:117] "RemoveContainer" containerID="f86a624cad5544fabf3d57d44531b7f2dc3ab563fd259866309dc22a7f70061f" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.762873 4730 scope.go:117] "RemoveContainer" containerID="bc5b31d8e552e7d705f3847a601eb6a6cdd43104cb139f5fefea06f83f7019fb" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.762942 4730 scope.go:117] "RemoveContainer" containerID="b79ccc8f9f8687f81b72396372015d0c3b088360a39f057a123836960c51f360" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.763097 4730 scope.go:117] "RemoveContainer" containerID="b05d384f284938e62a50508baa50781abb2b371b6922f2e11d344f430b2b032d" Jan 31 16:45:20 crc kubenswrapper[4730]: E0131 16:45:20.763456 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.765970 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ctbfr" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.767624 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ctbfr" event={"ID":"d96c3ae0-9cf1-40bf-9ba2-89066c04c975","Type":"ContainerDied","Data":"7d51a9681830051359ecad90c6a40a1e6b37a71982e3cda18b4453c103e7be6e"} Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.767667 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d51a9681830051359ecad90c6a40a1e6b37a71982e3cda18b4453c103e7be6e" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.829421 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ctbfr"] Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.835987 4730 status_manager.go:907] "Failed to delete status for pod" pod="openstack/root-account-create-update-ctbfr" err="pods \"root-account-create-update-ctbfr\" not found" Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.836675 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ctbfr"] Jan 31 16:45:20 crc kubenswrapper[4730]: I0131 16:45:20.840497 4730 scope.go:117] "RemoveContainer" containerID="9c15d63ad8f42443e6fb812f50cad6005da98449bf12408c6e6fcc99e744c4a3" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.265125 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.298554 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c2561b-5ed5-4508-b5a9-b4179c91ac72-operator-scripts\") pod \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.298651 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfvqh\" (UniqueName: \"kubernetes.io/projected/45c2561b-5ed5-4508-b5a9-b4179c91ac72-kube-api-access-sfvqh\") pod \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\" (UID: \"45c2561b-5ed5-4508-b5a9-b4179c91ac72\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.299065 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c2561b-5ed5-4508-b5a9-b4179c91ac72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45c2561b-5ed5-4508-b5a9-b4179c91ac72" (UID: "45c2561b-5ed5-4508-b5a9-b4179c91ac72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.299771 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c2561b-5ed5-4508-b5a9-b4179c91ac72-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.322986 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c2561b-5ed5-4508-b5a9-b4179c91ac72-kube-api-access-sfvqh" (OuterVolumeSpecName: "kube-api-access-sfvqh") pod "45c2561b-5ed5-4508-b5a9-b4179c91ac72" (UID: "45c2561b-5ed5-4508-b5a9-b4179c91ac72"). InnerVolumeSpecName "kube-api-access-sfvqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.372378 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.401599 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2964865c-12e5-4d18-bd62-16629f4a1090-operator-scripts\") pod \"2964865c-12e5-4d18-bd62-16629f4a1090\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.401705 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vjj4\" (UniqueName: \"kubernetes.io/projected/2964865c-12e5-4d18-bd62-16629f4a1090-kube-api-access-8vjj4\") pod \"2964865c-12e5-4d18-bd62-16629f4a1090\" (UID: \"2964865c-12e5-4d18-bd62-16629f4a1090\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.402061 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfvqh\" (UniqueName: \"kubernetes.io/projected/45c2561b-5ed5-4508-b5a9-b4179c91ac72-kube-api-access-sfvqh\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.402717 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2964865c-12e5-4d18-bd62-16629f4a1090-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2964865c-12e5-4d18-bd62-16629f4a1090" (UID: "2964865c-12e5-4d18-bd62-16629f4a1090"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.405199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2964865c-12e5-4d18-bd62-16629f4a1090-kube-api-access-8vjj4" (OuterVolumeSpecName: "kube-api-access-8vjj4") pod "2964865c-12e5-4d18-bd62-16629f4a1090" (UID: "2964865c-12e5-4d18-bd62-16629f4a1090"). InnerVolumeSpecName "kube-api-access-8vjj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.439518 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.439666 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.473967 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.505678 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-operator-scripts\") pod \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.505817 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dccn6\" (UniqueName: \"kubernetes.io/projected/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-kube-api-access-dccn6\") pod \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.505856 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl8tp\" (UniqueName: \"kubernetes.io/projected/8d28b8af-a349-44fa-8e46-ec5c26389dff-kube-api-access-jl8tp\") pod \"8d28b8af-a349-44fa-8e46-ec5c26389dff\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.506063 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d28b8af-a349-44fa-8e46-ec5c26389dff-operator-scripts\") pod \"8d28b8af-a349-44fa-8e46-ec5c26389dff\" (UID: \"8d28b8af-a349-44fa-8e46-ec5c26389dff\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.506107 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-operator-scripts\") pod \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\" (UID: \"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.506139 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9v7p\" (UniqueName: \"kubernetes.io/projected/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-kube-api-access-q9v7p\") pod \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\" (UID: \"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.506282 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5c1ddc8-93ef-4228-aa5b-05989e77b3ac" (UID: "b5c1ddc8-93ef-4228-aa5b-05989e77b3ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.506544 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5dc6e44-d1e4-4d5e-a83e-f2223e70f013" (UID: "b5dc6e44-d1e4-4d5e-a83e-f2223e70f013"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.506706 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d28b8af-a349-44fa-8e46-ec5c26389dff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d28b8af-a349-44fa-8e46-ec5c26389dff" (UID: "8d28b8af-a349-44fa-8e46-ec5c26389dff"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.506994 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vjj4\" (UniqueName: \"kubernetes.io/projected/2964865c-12e5-4d18-bd62-16629f4a1090-kube-api-access-8vjj4\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.507006 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d28b8af-a349-44fa-8e46-ec5c26389dff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.507033 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.507044 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2964865c-12e5-4d18-bd62-16629f4a1090-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.507052 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.508672 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-kube-api-access-dccn6" (OuterVolumeSpecName: "kube-api-access-dccn6") pod "b5dc6e44-d1e4-4d5e-a83e-f2223e70f013" (UID: "b5dc6e44-d1e4-4d5e-a83e-f2223e70f013"). InnerVolumeSpecName "kube-api-access-dccn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.511963 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-kube-api-access-q9v7p" (OuterVolumeSpecName: "kube-api-access-q9v7p") pod "b5c1ddc8-93ef-4228-aa5b-05989e77b3ac" (UID: "b5c1ddc8-93ef-4228-aa5b-05989e77b3ac"). InnerVolumeSpecName "kube-api-access-q9v7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.517682 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d28b8af-a349-44fa-8e46-ec5c26389dff-kube-api-access-jl8tp" (OuterVolumeSpecName: "kube-api-access-jl8tp") pod "8d28b8af-a349-44fa-8e46-ec5c26389dff" (UID: "8d28b8af-a349-44fa-8e46-ec5c26389dff"). InnerVolumeSpecName "kube-api-access-jl8tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.530778 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.607549 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptpqm\" (UniqueName: \"kubernetes.io/projected/31af8919-7a56-4384-9ee9-edf256738e2d-kube-api-access-ptpqm\") pod \"31af8919-7a56-4384-9ee9-edf256738e2d\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.607639 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31af8919-7a56-4384-9ee9-edf256738e2d-operator-scripts\") pod \"31af8919-7a56-4384-9ee9-edf256738e2d\" (UID: \"31af8919-7a56-4384-9ee9-edf256738e2d\") " Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.607926 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9v7p\" (UniqueName: \"kubernetes.io/projected/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac-kube-api-access-q9v7p\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.607944 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dccn6\" (UniqueName: \"kubernetes.io/projected/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013-kube-api-access-dccn6\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.607953 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl8tp\" (UniqueName: \"kubernetes.io/projected/8d28b8af-a349-44fa-8e46-ec5c26389dff-kube-api-access-jl8tp\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.608100 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31af8919-7a56-4384-9ee9-edf256738e2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31af8919-7a56-4384-9ee9-edf256738e2d" (UID: "31af8919-7a56-4384-9ee9-edf256738e2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.610731 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31af8919-7a56-4384-9ee9-edf256738e2d-kube-api-access-ptpqm" (OuterVolumeSpecName: "kube-api-access-ptpqm") pod "31af8919-7a56-4384-9ee9-edf256738e2d" (UID: "31af8919-7a56-4384-9ee9-edf256738e2d"). InnerVolumeSpecName "kube-api-access-ptpqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.709619 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptpqm\" (UniqueName: \"kubernetes.io/projected/31af8919-7a56-4384-9ee9-edf256738e2d-kube-api-access-ptpqm\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.709646 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31af8919-7a56-4384-9ee9-edf256738e2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.774226 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a81a-account-create-update-482zr" event={"ID":"45c2561b-5ed5-4508-b5a9-b4179c91ac72","Type":"ContainerDied","Data":"c610265c4bf19eff0889b64149eb24d7f157a37b42005c1069056b6aabd1adde"} Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.774269 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c610265c4bf19eff0889b64149eb24d7f157a37b42005c1069056b6aabd1adde" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.774335 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a81a-account-create-update-482zr" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.779840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v5qvf" event={"ID":"31af8919-7a56-4384-9ee9-edf256738e2d","Type":"ContainerDied","Data":"45a3ba8656c3a9c7784b9f275ab80c76c880c0fc63e10a6c6f3a4c3a998cac60"} Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.779879 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a3ba8656c3a9c7784b9f275ab80c76c880c0fc63e10a6c6f3a4c3a998cac60" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.779858 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v5qvf" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.781217 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sjqqh" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.781243 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sjqqh" event={"ID":"b5c1ddc8-93ef-4228-aa5b-05989e77b3ac","Type":"ContainerDied","Data":"66e7365415a78da58f560ad942f29fc59a55d003f355fe3d7bd369b6d6d51f49"} Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.781270 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e7365415a78da58f560ad942f29fc59a55d003f355fe3d7bd369b6d6d51f49" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.782336 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b7vmm" event={"ID":"b5dc6e44-d1e4-4d5e-a83e-f2223e70f013","Type":"ContainerDied","Data":"3aa2fcd41ce8082c717e11372e41ac0869fba15a988aed80d6acdd225a32ceb9"} Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.782364 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa2fcd41ce8082c717e11372e41ac0869fba15a988aed80d6acdd225a32ceb9" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.782408 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-b7vmm" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.802935 4730 scope.go:117] "RemoveContainer" containerID="bc5b31d8e552e7d705f3847a601eb6a6cdd43104cb139f5fefea06f83f7019fb" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.803015 4730 scope.go:117] "RemoveContainer" containerID="b79ccc8f9f8687f81b72396372015d0c3b088360a39f057a123836960c51f360" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.803097 4730 scope.go:117] "RemoveContainer" containerID="b05d384f284938e62a50508baa50781abb2b371b6922f2e11d344f430b2b032d" Jan 31 16:45:21 crc kubenswrapper[4730]: E0131 16:45:21.803337 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.808617 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d8b-account-create-update-5d7p8" event={"ID":"2964865c-12e5-4d18-bd62-16629f4a1090","Type":"ContainerDied","Data":"bc703d94157951a719bcd6dd419cbb14452d753f8764fa2a854583010a7b09fb"} Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.808649 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc703d94157951a719bcd6dd419cbb14452d753f8764fa2a854583010a7b09fb" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.808702 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d8b-account-create-update-5d7p8" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.814773 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d7a2-account-create-update-9gqnx" event={"ID":"8d28b8af-a349-44fa-8e46-ec5c26389dff","Type":"ContainerDied","Data":"8fcfa637b2bdd1239c390d373590202c10bd58bc05d341333f159c9de83f948c"} Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.814823 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fcfa637b2bdd1239c390d373590202c10bd58bc05d341333f159c9de83f948c" Jan 31 16:45:21 crc kubenswrapper[4730]: I0131 16:45:21.814893 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d7a2-account-create-update-9gqnx" Jan 31 16:45:22 crc kubenswrapper[4730]: I0131 16:45:22.321509 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:22 crc kubenswrapper[4730]: I0131 16:45:22.472489 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96c3ae0-9cf1-40bf-9ba2-89066c04c975" path="/var/lib/kubelet/pods/d96c3ae0-9cf1-40bf-9ba2-89066c04c975/volumes" Jan 31 16:45:22 crc kubenswrapper[4730]: I0131 16:45:22.540710 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zhm"] Jan 31 16:45:22 crc kubenswrapper[4730]: I0131 16:45:22.822094 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x6zhm" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="registry-server" containerID="cri-o://7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722" gracePeriod=2 Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207389 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5vxrp"] Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.207682 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31af8919-7a56-4384-9ee9-edf256738e2d" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207697 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="31af8919-7a56-4384-9ee9-edf256738e2d" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.207708 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c2561b-5ed5-4508-b5a9-b4179c91ac72" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207714 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c2561b-5ed5-4508-b5a9-b4179c91ac72" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.207727 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dc6e44-d1e4-4d5e-a83e-f2223e70f013" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207733 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dc6e44-d1e4-4d5e-a83e-f2223e70f013" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.207748 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c1ddc8-93ef-4228-aa5b-05989e77b3ac" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207753 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c1ddc8-93ef-4228-aa5b-05989e77b3ac" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.207769 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d28b8af-a349-44fa-8e46-ec5c26389dff" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207775 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d28b8af-a349-44fa-8e46-ec5c26389dff" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.207785 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96c3ae0-9cf1-40bf-9ba2-89066c04c975" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc 
kubenswrapper[4730]: I0131 16:45:23.207791 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96c3ae0-9cf1-40bf-9ba2-89066c04c975" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.207813 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2964865c-12e5-4d18-bd62-16629f4a1090" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207820 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2964865c-12e5-4d18-bd62-16629f4a1090" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207976 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c2561b-5ed5-4508-b5a9-b4179c91ac72" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207987 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c1ddc8-93ef-4228-aa5b-05989e77b3ac" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.207995 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2964865c-12e5-4d18-bd62-16629f4a1090" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.208004 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dc6e44-d1e4-4d5e-a83e-f2223e70f013" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.208014 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="31af8919-7a56-4384-9ee9-edf256738e2d" containerName="mariadb-database-create" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.208024 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d28b8af-a349-44fa-8e46-ec5c26389dff" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.208033 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96c3ae0-9cf1-40bf-9ba2-89066c04c975" containerName="mariadb-account-create-update" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.208500 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.213098 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-w5ds8" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.213138 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.225711 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5vxrp"] Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.336348 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-db-sync-config-data\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.336582 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-config-data\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.336873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-combined-ca-bundle\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.337043 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpp5\" (UniqueName: \"kubernetes.io/projected/627cf9cc-1e11-455d-b186-f159d4eed39c-kube-api-access-tmpp5\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.342775 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.438064 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-catalog-content\") pod \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.438171 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-utilities\") pod \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.438320 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnxr4\" (UniqueName: \"kubernetes.io/projected/a0d12ea3-22b7-4e96-9a8e-102e6473918c-kube-api-access-mnxr4\") pod \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\" (UID: \"a0d12ea3-22b7-4e96-9a8e-102e6473918c\") " Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.438516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-config-data\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.438579 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-combined-ca-bundle\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.438609 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmpp5\" (UniqueName: \"kubernetes.io/projected/627cf9cc-1e11-455d-b186-f159d4eed39c-kube-api-access-tmpp5\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.438652 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-db-sync-config-data\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.446825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-config-data\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.448222 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-utilities" (OuterVolumeSpecName: "utilities") pod "a0d12ea3-22b7-4e96-9a8e-102e6473918c" (UID: "a0d12ea3-22b7-4e96-9a8e-102e6473918c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.448316 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d12ea3-22b7-4e96-9a8e-102e6473918c-kube-api-access-mnxr4" (OuterVolumeSpecName: "kube-api-access-mnxr4") pod "a0d12ea3-22b7-4e96-9a8e-102e6473918c" (UID: "a0d12ea3-22b7-4e96-9a8e-102e6473918c"). InnerVolumeSpecName "kube-api-access-mnxr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.448995 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-db-sync-config-data\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.453746 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-combined-ca-bundle\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.460623 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpp5\" (UniqueName: \"kubernetes.io/projected/627cf9cc-1e11-455d-b186-f159d4eed39c-kube-api-access-tmpp5\") pod \"glance-db-sync-5vxrp\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.474672 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0d12ea3-22b7-4e96-9a8e-102e6473918c" (UID: "a0d12ea3-22b7-4e96-9a8e-102e6473918c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.526145 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gbpkm" podUID="1b59c538-9f79-4e4e-9d74-6eb5f1758795" containerName="ovn-controller" probeResult="failure" output=< Jan 31 16:45:23 crc kubenswrapper[4730]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 16:45:23 crc kubenswrapper[4730]: > Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.527446 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.540251 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.540278 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnxr4\" (UniqueName: \"kubernetes.io/projected/a0d12ea3-22b7-4e96-9a8e-102e6473918c-kube-api-access-mnxr4\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.540287 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d12ea3-22b7-4e96-9a8e-102e6473918c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.547633 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.554146 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-88h7f" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.834190 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gbpkm-config-6slkg"] Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.834481 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="registry-server" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.834493 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="registry-server" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.834515 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="extract-utilities" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.834521 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="extract-utilities" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.834538 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="extract-content" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.834546 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="extract-content" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.834722 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerName="registry-server" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.835227 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.838466 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.839844 4730 generic.go:334] "Generic (PLEG): container finished" podID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" containerID="7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722" exitCode=0 Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.839942 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zhm" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.839983 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerDied","Data":"7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722"} Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.840011 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zhm" event={"ID":"a0d12ea3-22b7-4e96-9a8e-102e6473918c","Type":"ContainerDied","Data":"fda5a09476ce3c4014808dd193ebad3422ec3702f067385cdbbbfd847a19afc6"} Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.840030 4730 scope.go:117] "RemoveContainer" containerID="7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.860352 4730 scope.go:117] "RemoveContainer" containerID="4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.890350 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zhm"] Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.898227 4730 scope.go:117] "RemoveContainer" containerID="946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.904344 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zhm"] Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.922062 4730 scope.go:117] "RemoveContainer" containerID="7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.927369 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722\": container with ID starting with 7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722 not found: ID does not exist" containerID="7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.927408 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722"} err="failed to get container status \"7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722\": rpc error: code = NotFound desc = could not find container \"7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722\": container with ID starting with 7d3a448ecfabcc16daeeab165d3fe37632b4d23de729337366f9054520ae2722 not found: ID does not exist" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.927435 4730 scope.go:117] "RemoveContainer" containerID="4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.927974 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab\": container with ID starting with 4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab not found: ID does not exist" containerID="4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.928017 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab"} err="failed to get container status \"4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab\": rpc error: code = NotFound desc = could not find container \"4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab\": container with ID starting with 4ce22db72fd68f081b18f38d33da58e1a581b8c9432572070834b71013c8c3ab not found: ID does not exist" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.928043 4730 scope.go:117] "RemoveContainer" containerID="946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269" Jan 31 16:45:23 crc kubenswrapper[4730]: E0131 16:45:23.928360 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269\": container with ID starting with 946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269 not found: ID does not exist" containerID="946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.928477 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269"} err="failed to get container status \"946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269\": rpc error: code = NotFound desc = could not find container \"946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269\": container with ID starting with 946ef030de80241a3391fd0525a4951ccfed3c6479f50e4a1fc39f20f7c3f269 not found: ID does not exist" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.945618 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltnzk\" (UniqueName: \"kubernetes.io/projected/4113a122-df4a-4358-b4b6-b3a9fb18a640-kube-api-access-ltnzk\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.945678 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run-ovn\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.945758 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-additional-scripts\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.945786 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.945842 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-scripts\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.946649 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-log-ovn\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:23 crc kubenswrapper[4730]: I0131 16:45:23.976109 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gbpkm-config-6slkg"] Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.048559 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.048907 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-scripts\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.048859 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.051002 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-scripts\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.051067 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-log-ovn\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.051158 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-log-ovn\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.051273 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltnzk\" (UniqueName: \"kubernetes.io/projected/4113a122-df4a-4358-b4b6-b3a9fb18a640-kube-api-access-ltnzk\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") 
" pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.051301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run-ovn\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.051567 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run-ovn\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.051627 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-additional-scripts\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.052065 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-additional-scripts\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.072643 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltnzk\" (UniqueName: \"kubernetes.io/projected/4113a122-df4a-4358-b4b6-b3a9fb18a640-kube-api-access-ltnzk\") pod \"ovn-controller-gbpkm-config-6slkg\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.161979 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.210357 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.370635 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5vxrp"] Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.475078 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d12ea3-22b7-4e96-9a8e-102e6473918c" path="/var/lib/kubelet/pods/a0d12ea3-22b7-4e96-9a8e-102e6473918c/volumes" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.512288 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wlptt"] Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.513194 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.520030 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wlptt"] Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.522285 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.663990 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-operator-scripts\") pod \"root-account-create-update-wlptt\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.664133 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrrv\" (UniqueName: \"kubernetes.io/projected/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-kube-api-access-7hrrv\") pod \"root-account-create-update-wlptt\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.666251 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gbpkm-config-6slkg"] Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.765759 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hrrv\" (UniqueName: \"kubernetes.io/projected/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-kube-api-access-7hrrv\") pod \"root-account-create-update-wlptt\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.765847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-operator-scripts\") pod \"root-account-create-update-wlptt\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.766526 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-operator-scripts\") pod \"root-account-create-update-wlptt\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.785563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hrrv\" (UniqueName: \"kubernetes.io/projected/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-kube-api-access-7hrrv\") pod \"root-account-create-update-wlptt\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.842044 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.849318 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gbpkm-config-6slkg" event={"ID":"4113a122-df4a-4358-b4b6-b3a9fb18a640","Type":"ContainerStarted","Data":"05c5db6d247faf61d9d2c02b5ce1a8ab2eeff0944b4e9793ad8ecb251832bd5c"} Jan 31 16:45:24 crc kubenswrapper[4730]: I0131 16:45:24.851734 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vxrp" event={"ID":"627cf9cc-1e11-455d-b186-f159d4eed39c","Type":"ContainerStarted","Data":"da08308eb3541103a04531ab5cec124e93d64d789abf08c7735f189f28ac38a5"} Jan 31 16:45:25 crc kubenswrapper[4730]: I0131 16:45:25.346217 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wlptt"] Jan 31 16:45:25 crc kubenswrapper[4730]: W0131 16:45:25.359464 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fe236ab_9fd5_43a6_9ed8_242f4c6dbb1e.slice/crio-d18e80867cf1af5d57a3ec1170c391530af3ad50355fb74766f090db1bce115e WatchSource:0}: Error finding container d18e80867cf1af5d57a3ec1170c391530af3ad50355fb74766f090db1bce115e: Status 404 returned error can't find the container with id d18e80867cf1af5d57a3ec1170c391530af3ad50355fb74766f090db1bce115e Jan 31 16:45:25 crc kubenswrapper[4730]: I0131 16:45:25.557262 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66hvq" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" probeResult="failure" output=< Jan 31 16:45:25 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:45:25 crc kubenswrapper[4730]: > Jan 31 16:45:25 crc kubenswrapper[4730]: I0131 16:45:25.866513 4730 generic.go:334] "Generic (PLEG): container finished" podID="4113a122-df4a-4358-b4b6-b3a9fb18a640" containerID="4a7632f37124c859c197ad647098e2a83a4abbf9eda430770abc4c6188d37eeb" exitCode=0 Jan 31 16:45:25 crc kubenswrapper[4730]: I0131 16:45:25.866562 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gbpkm-config-6slkg" event={"ID":"4113a122-df4a-4358-b4b6-b3a9fb18a640","Type":"ContainerDied","Data":"4a7632f37124c859c197ad647098e2a83a4abbf9eda430770abc4c6188d37eeb"} Jan 31 16:45:25 crc kubenswrapper[4730]: I0131 16:45:25.868137 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlptt" event={"ID":"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e","Type":"ContainerStarted","Data":"085f5a53443c1d1c759ba38149fcc00cd96b2894963f317fd1038c518df3cdc2"} Jan 31 16:45:25 crc kubenswrapper[4730]: I0131 16:45:25.868173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlptt" event={"ID":"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e","Type":"ContainerStarted","Data":"d18e80867cf1af5d57a3ec1170c391530af3ad50355fb74766f090db1bce115e"} Jan 31 16:45:25 crc kubenswrapper[4730]: I0131 16:45:25.917138 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wlptt" podStartSLOduration=1.9171230289999999 podStartE2EDuration="1.917123029s" podCreationTimestamp="2026-01-31 16:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:25.911955776 +0000 UTC m=+912.718012692" 
watchObservedRunningTime="2026-01-31 16:45:25.917123029 +0000 UTC m=+912.723179945" Jan 31 16:45:26 crc kubenswrapper[4730]: I0131 16:45:26.877778 4730 generic.go:334] "Generic (PLEG): container finished" podID="5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e" containerID="085f5a53443c1d1c759ba38149fcc00cd96b2894963f317fd1038c518df3cdc2" exitCode=0 Jan 31 16:45:26 crc kubenswrapper[4730]: I0131 16:45:26.877934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlptt" event={"ID":"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e","Type":"ContainerDied","Data":"085f5a53443c1d1c759ba38149fcc00cd96b2894963f317fd1038c518df3cdc2"} Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.197861 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.321754 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run\") pod \"4113a122-df4a-4358-b4b6-b3a9fb18a640\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.321843 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-log-ovn\") pod \"4113a122-df4a-4358-b4b6-b3a9fb18a640\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.321873 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run" (OuterVolumeSpecName: "var-run") pod "4113a122-df4a-4358-b4b6-b3a9fb18a640" (UID: "4113a122-df4a-4358-b4b6-b3a9fb18a640"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.321910 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltnzk\" (UniqueName: \"kubernetes.io/projected/4113a122-df4a-4358-b4b6-b3a9fb18a640-kube-api-access-ltnzk\") pod \"4113a122-df4a-4358-b4b6-b3a9fb18a640\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.321921 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4113a122-df4a-4358-b4b6-b3a9fb18a640" (UID: "4113a122-df4a-4358-b4b6-b3a9fb18a640"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.321948 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run-ovn\") pod \"4113a122-df4a-4358-b4b6-b3a9fb18a640\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.322019 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-additional-scripts\") pod \"4113a122-df4a-4358-b4b6-b3a9fb18a640\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.322056 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-scripts\") pod \"4113a122-df4a-4358-b4b6-b3a9fb18a640\" (UID: \"4113a122-df4a-4358-b4b6-b3a9fb18a640\") " Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.322213 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4113a122-df4a-4358-b4b6-b3a9fb18a640" (UID: "4113a122-df4a-4358-b4b6-b3a9fb18a640"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.322397 4730 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.322412 4730 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.322421 4730 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4113a122-df4a-4358-b4b6-b3a9fb18a640-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.322679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4113a122-df4a-4358-b4b6-b3a9fb18a640" (UID: "4113a122-df4a-4358-b4b6-b3a9fb18a640"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.323080 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-scripts" (OuterVolumeSpecName: "scripts") pod "4113a122-df4a-4358-b4b6-b3a9fb18a640" (UID: "4113a122-df4a-4358-b4b6-b3a9fb18a640"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.338910 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4113a122-df4a-4358-b4b6-b3a9fb18a640-kube-api-access-ltnzk" (OuterVolumeSpecName: "kube-api-access-ltnzk") pod "4113a122-df4a-4358-b4b6-b3a9fb18a640" (UID: "4113a122-df4a-4358-b4b6-b3a9fb18a640"). 
InnerVolumeSpecName "kube-api-access-ltnzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.423734 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltnzk\" (UniqueName: \"kubernetes.io/projected/4113a122-df4a-4358-b4b6-b3a9fb18a640-kube-api-access-ltnzk\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.423764 4730 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.423775 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4113a122-df4a-4358-b4b6-b3a9fb18a640-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.894274 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gbpkm-config-6slkg" Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.894439 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gbpkm-config-6slkg" event={"ID":"4113a122-df4a-4358-b4b6-b3a9fb18a640","Type":"ContainerDied","Data":"05c5db6d247faf61d9d2c02b5ce1a8ab2eeff0944b4e9793ad8ecb251832bd5c"} Jan 31 16:45:27 crc kubenswrapper[4730]: I0131 16:45:27.895076 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05c5db6d247faf61d9d2c02b5ce1a8ab2eeff0944b4e9793ad8ecb251832bd5c" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.251529 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.313161 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gbpkm-config-6slkg"] Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.316407 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gbpkm-config-6slkg"] Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.341316 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-operator-scripts\") pod \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.342001 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e" (UID: "5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.342051 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hrrv\" (UniqueName: \"kubernetes.io/projected/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-kube-api-access-7hrrv\") pod \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\" (UID: \"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e\") " Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.342346 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.347361 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-kube-api-access-7hrrv" (OuterVolumeSpecName: "kube-api-access-7hrrv") pod "5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e" (UID: "5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e"). InnerVolumeSpecName "kube-api-access-7hrrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.442902 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hrrv\" (UniqueName: \"kubernetes.io/projected/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e-kube-api-access-7hrrv\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.473243 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4113a122-df4a-4358-b4b6-b3a9fb18a640" path="/var/lib/kubelet/pods/4113a122-df4a-4358-b4b6-b3a9fb18a640/volumes" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.506681 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gbpkm" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.903788 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlptt" event={"ID":"5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e","Type":"ContainerDied","Data":"d18e80867cf1af5d57a3ec1170c391530af3ad50355fb74766f090db1bce115e"} Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.903847 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d18e80867cf1af5d57a3ec1170c391530af3ad50355fb74766f090db1bce115e" Jan 31 16:45:28 crc kubenswrapper[4730]: I0131 16:45:28.903896 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wlptt" Jan 31 16:45:30 crc kubenswrapper[4730]: I0131 16:45:30.821118 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wlptt"] Jan 31 16:45:30 crc kubenswrapper[4730]: I0131 16:45:30.826714 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wlptt"] Jan 31 16:45:32 crc kubenswrapper[4730]: I0131 16:45:32.475584 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e" path="/var/lib/kubelet/pods/5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e/volumes" Jan 31 16:45:33 crc kubenswrapper[4730]: I0131 16:45:33.463859 4730 scope.go:117] "RemoveContainer" containerID="bc5b31d8e552e7d705f3847a601eb6a6cdd43104cb139f5fefea06f83f7019fb" Jan 31 16:45:33 crc kubenswrapper[4730]: I0131 16:45:33.463927 4730 scope.go:117] "RemoveContainer" containerID="b79ccc8f9f8687f81b72396372015d0c3b088360a39f057a123836960c51f360" Jan 31 16:45:33 crc kubenswrapper[4730]: I0131 16:45:33.464009 4730 scope.go:117] "RemoveContainer" containerID="b05d384f284938e62a50508baa50781abb2b371b6922f2e11d344f430b2b032d" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.207931 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.588035 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.672220 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-v46sw"] Jan 31 16:45:34 crc kubenswrapper[4730]: E0131 16:45:34.672530 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4113a122-df4a-4358-b4b6-b3a9fb18a640" containerName="ovn-config" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.672545 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4113a122-df4a-4358-b4b6-b3a9fb18a640" containerName="ovn-config" Jan 31 16:45:34 crc kubenswrapper[4730]: E0131 16:45:34.672558 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e" containerName="mariadb-account-create-update" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.672564 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e" containerName="mariadb-account-create-update" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.672706 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4113a122-df4a-4358-b4b6-b3a9fb18a640" containerName="ovn-config" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.672735 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe236ab-9fd5-43a6-9ed8-242f4c6dbb1e" containerName="mariadb-account-create-update" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.673246 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.707957 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1dce-account-create-update-2crhq"] Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.709341 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.716922 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.720573 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v46sw"] Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.754147 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1dce-account-create-update-2crhq"] Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.799647 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b163a61-8109-4989-ada6-8e408c05448d-operator-scripts\") pod \"barbican-1dce-account-create-update-2crhq\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.800962 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkxvh\" (UniqueName: \"kubernetes.io/projected/6b163a61-8109-4989-ada6-8e408c05448d-kube-api-access-bkxvh\") pod \"barbican-1dce-account-create-update-2crhq\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.801015 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48dba275-7242-434b-b55e-1c62a25c7c1a-operator-scripts\") pod \"cinder-db-create-v46sw\" (UID: \"48dba275-7242-434b-b55e-1c62a25c7c1a\") " pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.801093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwb6\" (UniqueName: \"kubernetes.io/projected/48dba275-7242-434b-b55e-1c62a25c7c1a-kube-api-access-lwwb6\") pod \"cinder-db-create-v46sw\" (UID: \"48dba275-7242-434b-b55e-1c62a25c7c1a\") " pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.864522 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2b0d-account-create-update-rh8s6"] Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.865606 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.877037 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-l4plp"] Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.877581 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.878174 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.882641 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l4plp"] Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.893077 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2b0d-account-create-update-rh8s6"] Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.907602 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwwb6\" (UniqueName: \"kubernetes.io/projected/48dba275-7242-434b-b55e-1c62a25c7c1a-kube-api-access-lwwb6\") pod \"cinder-db-create-v46sw\" (UID: \"48dba275-7242-434b-b55e-1c62a25c7c1a\") " pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.907728 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b163a61-8109-4989-ada6-8e408c05448d-operator-scripts\") pod \"barbican-1dce-account-create-update-2crhq\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.907752 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkxvh\" (UniqueName: \"kubernetes.io/projected/6b163a61-8109-4989-ada6-8e408c05448d-kube-api-access-bkxvh\") pod \"barbican-1dce-account-create-update-2crhq\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.907771 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48dba275-7242-434b-b55e-1c62a25c7c1a-operator-scripts\") pod \"cinder-db-create-v46sw\" (UID: \"48dba275-7242-434b-b55e-1c62a25c7c1a\") " pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.908466 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48dba275-7242-434b-b55e-1c62a25c7c1a-operator-scripts\") pod \"cinder-db-create-v46sw\" (UID: \"48dba275-7242-434b-b55e-1c62a25c7c1a\") " pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.909408 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b163a61-8109-4989-ada6-8e408c05448d-operator-scripts\") pod \"barbican-1dce-account-create-update-2crhq\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.967844 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwwb6\" (UniqueName: \"kubernetes.io/projected/48dba275-7242-434b-b55e-1c62a25c7c1a-kube-api-access-lwwb6\") pod \"cinder-db-create-v46sw\" (UID: \"48dba275-7242-434b-b55e-1c62a25c7c1a\") " pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:34 crc kubenswrapper[4730]: I0131 16:45:34.968419 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkxvh\" (UniqueName: \"kubernetes.io/projected/6b163a61-8109-4989-ada6-8e408c05448d-kube-api-access-bkxvh\") pod \"barbican-1dce-account-create-update-2crhq\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " 
pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.001460 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4dcfm"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.002555 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.010357 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4dcfm"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.011341 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.011521 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.011637 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n4fjp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.011749 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.016592 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9f2bffc-75d1-4da3-be48-728edaf3e0be-operator-scripts\") pod \"barbican-db-create-l4plp\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.016849 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhn9g\" (UniqueName: \"kubernetes.io/projected/d9f2bffc-75d1-4da3-be48-728edaf3e0be-kube-api-access-bhn9g\") pod \"barbican-db-create-l4plp\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.016960 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.016961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kts5z\" (UniqueName: \"kubernetes.io/projected/2111311f-b72a-4c59-84a4-4c97bfa06105-kube-api-access-kts5z\") pod \"cinder-2b0d-account-create-update-rh8s6\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.017165 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2111311f-b72a-4c59-84a4-4c97bfa06105-operator-scripts\") pod \"cinder-2b0d-account-create-update-rh8s6\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.039816 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.101630 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ltdm6"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.102966 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.118945 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kts5z\" (UniqueName: \"kubernetes.io/projected/2111311f-b72a-4c59-84a4-4c97bfa06105-kube-api-access-kts5z\") pod \"cinder-2b0d-account-create-update-rh8s6\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.119185 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-combined-ca-bundle\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.119270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2111311f-b72a-4c59-84a4-4c97bfa06105-operator-scripts\") pod \"cinder-2b0d-account-create-update-rh8s6\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.119984 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s96p\" (UniqueName: \"kubernetes.io/projected/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-kube-api-access-5s96p\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.124744 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9f2bffc-75d1-4da3-be48-728edaf3e0be-operator-scripts\") pod \"barbican-db-create-l4plp\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.124922 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-config-data\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.125189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhn9g\" (UniqueName: \"kubernetes.io/projected/d9f2bffc-75d1-4da3-be48-728edaf3e0be-kube-api-access-bhn9g\") pod \"barbican-db-create-l4plp\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.120703 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2111311f-b72a-4c59-84a4-4c97bfa06105-operator-scripts\") pod \"cinder-2b0d-account-create-update-rh8s6\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.126232 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9f2bffc-75d1-4da3-be48-728edaf3e0be-operator-scripts\") pod 
\"barbican-db-create-l4plp\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.150961 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ltdm6"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.172071 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhn9g\" (UniqueName: \"kubernetes.io/projected/d9f2bffc-75d1-4da3-be48-728edaf3e0be-kube-api-access-bhn9g\") pod \"barbican-db-create-l4plp\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.174488 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kts5z\" (UniqueName: \"kubernetes.io/projected/2111311f-b72a-4c59-84a4-4c97bfa06105-kube-api-access-kts5z\") pod \"cinder-2b0d-account-create-update-rh8s6\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.181645 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.197253 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.200104 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-547f-account-create-update-4pbzk"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.201182 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.210780 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.227167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-operator-scripts\") pod \"neutron-db-create-ltdm6\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.227284 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-combined-ca-bundle\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.227363 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvjnl\" (UniqueName: \"kubernetes.io/projected/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-kube-api-access-pvjnl\") pod \"neutron-db-create-ltdm6\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.227417 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s96p\" (UniqueName: \"kubernetes.io/projected/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-kube-api-access-5s96p\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " 
pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.227446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-config-data\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.234689 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-combined-ca-bundle\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.235791 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-config-data\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.254972 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-547f-account-create-update-4pbzk"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.270198 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s96p\" (UniqueName: \"kubernetes.io/projected/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-kube-api-access-5s96p\") pod \"keystone-db-sync-4dcfm\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.327264 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.328816 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvjnl\" (UniqueName: \"kubernetes.io/projected/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-kube-api-access-pvjnl\") pod \"neutron-db-create-ltdm6\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.329028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-operator-scripts\") pod \"neutron-db-create-ltdm6\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.329141 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892cfb30-014c-4cdf-8822-dbcbe7dea46c-operator-scripts\") pod \"neutron-547f-account-create-update-4pbzk\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.329230 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2d8\" (UniqueName: \"kubernetes.io/projected/892cfb30-014c-4cdf-8822-dbcbe7dea46c-kube-api-access-8l2d8\") pod \"neutron-547f-account-create-update-4pbzk\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.330043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-operator-scripts\") pod \"neutron-db-create-ltdm6\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.365905 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvjnl\" (UniqueName: \"kubernetes.io/projected/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-kube-api-access-pvjnl\") pod \"neutron-db-create-ltdm6\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.431102 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892cfb30-014c-4cdf-8822-dbcbe7dea46c-operator-scripts\") pod \"neutron-547f-account-create-update-4pbzk\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.431147 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2d8\" (UniqueName: \"kubernetes.io/projected/892cfb30-014c-4cdf-8822-dbcbe7dea46c-kube-api-access-8l2d8\") pod \"neutron-547f-account-create-update-4pbzk\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.431822 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/892cfb30-014c-4cdf-8822-dbcbe7dea46c-operator-scripts\") pod \"neutron-547f-account-create-update-4pbzk\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.439832 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.454620 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2d8\" (UniqueName: \"kubernetes.io/projected/892cfb30-014c-4cdf-8822-dbcbe7dea46c-kube-api-access-8l2d8\") pod \"neutron-547f-account-create-update-4pbzk\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.609540 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66hvq" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" probeResult="failure" output=< Jan 31 16:45:35 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:45:35 crc kubenswrapper[4730]: > Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.611988 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.861702 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lv6bn"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.863290 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.865162 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.874688 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lv6bn"] Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.938211 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqqf\" (UniqueName: \"kubernetes.io/projected/c8fb99c8-b28a-450a-8692-e585216fbc53-kube-api-access-dpqqf\") pod \"root-account-create-update-lv6bn\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:35 crc kubenswrapper[4730]: I0131 16:45:35.938313 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8fb99c8-b28a-450a-8692-e585216fbc53-operator-scripts\") pod \"root-account-create-update-lv6bn\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:36 crc kubenswrapper[4730]: I0131 16:45:36.040639 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqqf\" (UniqueName: \"kubernetes.io/projected/c8fb99c8-b28a-450a-8692-e585216fbc53-kube-api-access-dpqqf\") pod \"root-account-create-update-lv6bn\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:36 crc kubenswrapper[4730]: I0131 16:45:36.040729 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8fb99c8-b28a-450a-8692-e585216fbc53-operator-scripts\") pod \"root-account-create-update-lv6bn\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:36 crc kubenswrapper[4730]: I0131 16:45:36.041398 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8fb99c8-b28a-450a-8692-e585216fbc53-operator-scripts\") pod \"root-account-create-update-lv6bn\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:36 crc kubenswrapper[4730]: I0131 16:45:36.058459 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpqqf\" (UniqueName: \"kubernetes.io/projected/c8fb99c8-b28a-450a-8692-e585216fbc53-kube-api-access-dpqqf\") pod \"root-account-create-update-lv6bn\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:36 crc kubenswrapper[4730]: I0131 16:45:36.179129 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:36 crc kubenswrapper[4730]: E0131 16:45:36.752327 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:45:36 crc kubenswrapper[4730]: E0131 16:45:36.752412 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:46:08.752394605 +0000 UTC m=+955.558451521 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:45:36 crc kubenswrapper[4730]: I0131 16:45:36.752441 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.032754 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765"} Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.154621 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1dce-account-create-update-2crhq"] Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.181018 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2b0d-account-create-update-rh8s6"] Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.597206 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4dcfm"] Jan 31 16:45:41 crc kubenswrapper[4730]: W0131 16:45:41.601031 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97354e1f_e4b3_4f45_a9f6_58d1932e9f45.slice/crio-a70290e67b17609f1283b739183b4f34d9f8bb657295e289e09bc4f1ec3a7675 WatchSource:0}: Error finding container a70290e67b17609f1283b739183b4f34d9f8bb657295e289e09bc4f1ec3a7675: Status 404 returned error can't find the container with id a70290e67b17609f1283b739183b4f34d9f8bb657295e289e09bc4f1ec3a7675 Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.625829 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lv6bn"] Jan 31 16:45:41 crc kubenswrapper[4730]: W0131 16:45:41.635342 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8fb99c8_b28a_450a_8692_e585216fbc53.slice/crio-348d8b26136a20110c1fcda6250b8af0bb98d1c467d7d177a22a58bdc604615e WatchSource:0}: Error finding container 348d8b26136a20110c1fcda6250b8af0bb98d1c467d7d177a22a58bdc604615e: Status 404 returned error can't find the container with id 348d8b26136a20110c1fcda6250b8af0bb98d1c467d7d177a22a58bdc604615e Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.638774 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v46sw"] Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.655329 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ltdm6"] Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.662792 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l4plp"] Jan 31 16:45:41 crc kubenswrapper[4730]: I0131 16:45:41.683712 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-547f-account-create-update-4pbzk"] Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.063935 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vxrp" 
event={"ID":"627cf9cc-1e11-455d-b186-f159d4eed39c","Type":"ContainerStarted","Data":"4f07dcc150b2023774fde7bb4915ca967d8b8644c88104b9125b3fac66c92813"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.071042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lv6bn" event={"ID":"c8fb99c8-b28a-450a-8692-e585216fbc53","Type":"ContainerStarted","Data":"3f4d0a9e999cf1d51a29a8d9e0aab5d18604850cecc4322a909b0ebe4fdeb3ad"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.071282 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lv6bn" event={"ID":"c8fb99c8-b28a-450a-8692-e585216fbc53","Type":"ContainerStarted","Data":"348d8b26136a20110c1fcda6250b8af0bb98d1c467d7d177a22a58bdc604615e"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.081468 4730 generic.go:334] "Generic (PLEG): container finished" podID="6b163a61-8109-4989-ada6-8e408c05448d" containerID="9062dbede20d796ce638c375a0c9eb1a4f176849690456f200fcdb19c576f593" exitCode=0 Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.081708 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dce-account-create-update-2crhq" event={"ID":"6b163a61-8109-4989-ada6-8e408c05448d","Type":"ContainerDied","Data":"9062dbede20d796ce638c375a0c9eb1a4f176849690456f200fcdb19c576f593"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.081848 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dce-account-create-update-2crhq" event={"ID":"6b163a61-8109-4989-ada6-8e408c05448d","Type":"ContainerStarted","Data":"326c410925fa0e8175cfa9e5c9615eb92393e8dfe033ca21285003a9569de13a"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.084706 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5vxrp" podStartSLOduration=2.797596875 podStartE2EDuration="19.084694376s" podCreationTimestamp="2026-01-31 16:45:23 +0000 UTC" firstStartedPulling="2026-01-31 16:45:24.399336028 +0000 UTC m=+911.205392944" lastFinishedPulling="2026-01-31 16:45:40.686433529 +0000 UTC m=+927.492490445" observedRunningTime="2026-01-31 16:45:42.079459651 +0000 UTC m=+928.885516567" watchObservedRunningTime="2026-01-31 16:45:42.084694376 +0000 UTC m=+928.890751292" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.099131 4730 generic.go:334] "Generic (PLEG): container finished" podID="2111311f-b72a-4c59-84a4-4c97bfa06105" containerID="419be2ea72d4aaae301c506a03356a157440daa24c27e7cc315f008cb5342da8" exitCode=0 Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.099223 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b0d-account-create-update-rh8s6" event={"ID":"2111311f-b72a-4c59-84a4-4c97bfa06105","Type":"ContainerDied","Data":"419be2ea72d4aaae301c506a03356a157440daa24c27e7cc315f008cb5342da8"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.099251 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b0d-account-create-update-rh8s6" event={"ID":"2111311f-b72a-4c59-84a4-4c97bfa06105","Type":"ContainerStarted","Data":"b1f12de80a0c3d57b8755c477ae372f5b7177b900b6144e597a993852aef4c85"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.105779 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-lv6bn" podStartSLOduration=7.10576516 podStartE2EDuration="7.10576516s" podCreationTimestamp="2026-01-31 16:45:35 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:42.103642215 +0000 UTC m=+928.909699131" watchObservedRunningTime="2026-01-31 16:45:42.10576516 +0000 UTC m=+928.911822076" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.131046 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-547f-account-create-update-4pbzk" event={"ID":"892cfb30-014c-4cdf-8822-dbcbe7dea46c","Type":"ContainerStarted","Data":"415c363beca5282f0080d3caf153f786edadf0d8213ad3a4683cf2a16c0bce64"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.131264 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-547f-account-create-update-4pbzk" event={"ID":"892cfb30-014c-4cdf-8822-dbcbe7dea46c","Type":"ContainerStarted","Data":"5f5801f9608edc9a6263a52a093fd8185a628c8b22c6d77b43f059ff4a5a3707"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.176441 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-547f-account-create-update-4pbzk" podStartSLOduration=7.176225408 podStartE2EDuration="7.176225408s" podCreationTimestamp="2026-01-31 16:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:42.16350123 +0000 UTC m=+928.969574726" watchObservedRunningTime="2026-01-31 16:45:42.176225408 +0000 UTC m=+928.982282324" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.196029 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765" exitCode=1 Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.196078 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b" exitCode=1 Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.196178 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.196207 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.196218 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.196235 4730 scope.go:117] "RemoveContainer" containerID="bc5b31d8e552e7d705f3847a601eb6a6cdd43104cb139f5fefea06f83f7019fb" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.196901 4730 scope.go:117] "RemoveContainer" containerID="786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.197005 4730 scope.go:117] "RemoveContainer" containerID="2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b" Jan 31 16:45:42 crc kubenswrapper[4730]: E0131 16:45:42.197403 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.201953 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dcfm" event={"ID":"97354e1f-e4b3-4f45-a9f6-58d1932e9f45","Type":"ContainerStarted","Data":"a70290e67b17609f1283b739183b4f34d9f8bb657295e289e09bc4f1ec3a7675"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.211774 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ltdm6" event={"ID":"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585","Type":"ContainerStarted","Data":"082494acdd299598f4b5087889203890c64e231997812e4cd45b3a029662d476"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.211830 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ltdm6" event={"ID":"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585","Type":"ContainerStarted","Data":"12ab995c21289628ff26857609f36d8714c7b048a54f3a0e98e925923ab0d9ec"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.222049 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l4plp" event={"ID":"d9f2bffc-75d1-4da3-be48-728edaf3e0be","Type":"ContainerStarted","Data":"6ad0c710e6c3c5af532d6d645f91243c6d5988b7372da3e6280a2db905129930"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.222109 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l4plp" event={"ID":"d9f2bffc-75d1-4da3-be48-728edaf3e0be","Type":"ContainerStarted","Data":"245ae040189ca63e4bbc357a2595b4942b15b2b629204571446a5cbf1be52cc7"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.227004 4730 scope.go:117] "RemoveContainer" containerID="b79ccc8f9f8687f81b72396372015d0c3b088360a39f057a123836960c51f360" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.232115 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v46sw" event={"ID":"48dba275-7242-434b-b55e-1c62a25c7c1a","Type":"ContainerStarted","Data":"784173489c1fdf0e74003f738ded67dfae8a15956196405075bf03ccdfd982d3"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.232154 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v46sw" event={"ID":"48dba275-7242-434b-b55e-1c62a25c7c1a","Type":"ContainerStarted","Data":"d5b7fbf0d31c0bddd3b98f6882177c6cd68bd51e39b8ff222bd603869412835c"} Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.277425 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-v46sw" podStartSLOduration=8.277406219 podStartE2EDuration="8.277406219s" podCreationTimestamp="2026-01-31 16:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:42.272102232 +0000 UTC m=+929.078159138" watchObservedRunningTime="2026-01-31 16:45:42.277406219 +0000 UTC m=+929.083463145" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.300393 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-db-create-l4plp" podStartSLOduration=8.300375781 podStartE2EDuration="8.300375781s" podCreationTimestamp="2026-01-31 16:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:42.290790924 +0000 UTC m=+929.096847840" watchObservedRunningTime="2026-01-31 16:45:42.300375781 +0000 UTC m=+929.106432687" Jan 31 16:45:42 crc kubenswrapper[4730]: I0131 16:45:42.327641 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-ltdm6" podStartSLOduration=7.327619774 podStartE2EDuration="7.327619774s" podCreationTimestamp="2026-01-31 16:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:42.326545146 +0000 UTC m=+929.132602052" watchObservedRunningTime="2026-01-31 16:45:42.327619774 +0000 UTC m=+929.133676690" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.241262 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v46sw" event={"ID":"48dba275-7242-434b-b55e-1c62a25c7c1a","Type":"ContainerDied","Data":"784173489c1fdf0e74003f738ded67dfae8a15956196405075bf03ccdfd982d3"} Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.241101 4730 generic.go:334] "Generic (PLEG): container finished" podID="48dba275-7242-434b-b55e-1c62a25c7c1a" containerID="784173489c1fdf0e74003f738ded67dfae8a15956196405075bf03ccdfd982d3" exitCode=0 Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.249321 4730 generic.go:334] "Generic (PLEG): container finished" podID="892cfb30-014c-4cdf-8822-dbcbe7dea46c" containerID="415c363beca5282f0080d3caf153f786edadf0d8213ad3a4683cf2a16c0bce64" exitCode=0 Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.249384 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-547f-account-create-update-4pbzk" event={"ID":"892cfb30-014c-4cdf-8822-dbcbe7dea46c","Type":"ContainerDied","Data":"415c363beca5282f0080d3caf153f786edadf0d8213ad3a4683cf2a16c0bce64"} Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.271611 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326" exitCode=1 Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.271687 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326"} Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.271746 4730 scope.go:117] "RemoveContainer" containerID="b05d384f284938e62a50508baa50781abb2b371b6922f2e11d344f430b2b032d" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.272731 4730 scope.go:117] "RemoveContainer" containerID="786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.272853 4730 scope.go:117] "RemoveContainer" containerID="2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.272979 4730 scope.go:117] "RemoveContainer" containerID="97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326" Jan 31 16:45:43 crc kubenswrapper[4730]: E0131 16:45:43.273342 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.278471 4730 generic.go:334] "Generic (PLEG): container finished" podID="c8fb99c8-b28a-450a-8692-e585216fbc53" containerID="3f4d0a9e999cf1d51a29a8d9e0aab5d18604850cecc4322a909b0ebe4fdeb3ad" exitCode=0 Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.278563 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lv6bn" event={"ID":"c8fb99c8-b28a-450a-8692-e585216fbc53","Type":"ContainerDied","Data":"3f4d0a9e999cf1d51a29a8d9e0aab5d18604850cecc4322a909b0ebe4fdeb3ad"} Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.281353 4730 generic.go:334] "Generic (PLEG): container finished" podID="1c3c71a4-f2bf-46da-9c7d-c7c4dba19585" containerID="082494acdd299598f4b5087889203890c64e231997812e4cd45b3a029662d476" exitCode=0 Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.281434 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ltdm6" event={"ID":"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585","Type":"ContainerDied","Data":"082494acdd299598f4b5087889203890c64e231997812e4cd45b3a029662d476"} Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.293739 4730 generic.go:334] "Generic (PLEG): container finished" podID="d9f2bffc-75d1-4da3-be48-728edaf3e0be" containerID="6ad0c710e6c3c5af532d6d645f91243c6d5988b7372da3e6280a2db905129930" exitCode=0 Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.293938 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l4plp" event={"ID":"d9f2bffc-75d1-4da3-be48-728edaf3e0be","Type":"ContainerDied","Data":"6ad0c710e6c3c5af532d6d645f91243c6d5988b7372da3e6280a2db905129930"} Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.741873 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.745493 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.819736 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkxvh\" (UniqueName: \"kubernetes.io/projected/6b163a61-8109-4989-ada6-8e408c05448d-kube-api-access-bkxvh\") pod \"6b163a61-8109-4989-ada6-8e408c05448d\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.819858 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b163a61-8109-4989-ada6-8e408c05448d-operator-scripts\") pod \"6b163a61-8109-4989-ada6-8e408c05448d\" (UID: \"6b163a61-8109-4989-ada6-8e408c05448d\") " Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.819924 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kts5z\" (UniqueName: \"kubernetes.io/projected/2111311f-b72a-4c59-84a4-4c97bfa06105-kube-api-access-kts5z\") pod \"2111311f-b72a-4c59-84a4-4c97bfa06105\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.819975 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2111311f-b72a-4c59-84a4-4c97bfa06105-operator-scripts\") pod \"2111311f-b72a-4c59-84a4-4c97bfa06105\" (UID: \"2111311f-b72a-4c59-84a4-4c97bfa06105\") " Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.820963 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b163a61-8109-4989-ada6-8e408c05448d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b163a61-8109-4989-ada6-8e408c05448d" (UID: "6b163a61-8109-4989-ada6-8e408c05448d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.821015 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2111311f-b72a-4c59-84a4-4c97bfa06105-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2111311f-b72a-4c59-84a4-4c97bfa06105" (UID: "2111311f-b72a-4c59-84a4-4c97bfa06105"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.821401 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2111311f-b72a-4c59-84a4-4c97bfa06105-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.821417 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b163a61-8109-4989-ada6-8e408c05448d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.827588 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b163a61-8109-4989-ada6-8e408c05448d-kube-api-access-bkxvh" (OuterVolumeSpecName: "kube-api-access-bkxvh") pod "6b163a61-8109-4989-ada6-8e408c05448d" (UID: "6b163a61-8109-4989-ada6-8e408c05448d"). InnerVolumeSpecName "kube-api-access-bkxvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.831886 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2111311f-b72a-4c59-84a4-4c97bfa06105-kube-api-access-kts5z" (OuterVolumeSpecName: "kube-api-access-kts5z") pod "2111311f-b72a-4c59-84a4-4c97bfa06105" (UID: "2111311f-b72a-4c59-84a4-4c97bfa06105"). InnerVolumeSpecName "kube-api-access-kts5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.922729 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkxvh\" (UniqueName: \"kubernetes.io/projected/6b163a61-8109-4989-ada6-8e408c05448d-kube-api-access-bkxvh\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:43 crc kubenswrapper[4730]: I0131 16:45:43.922759 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kts5z\" (UniqueName: \"kubernetes.io/projected/2111311f-b72a-4c59-84a4-4c97bfa06105-kube-api-access-kts5z\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.305632 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2b0d-account-create-update-rh8s6" event={"ID":"2111311f-b72a-4c59-84a4-4c97bfa06105","Type":"ContainerDied","Data":"b1f12de80a0c3d57b8755c477ae372f5b7177b900b6144e597a993852aef4c85"} Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.305674 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1f12de80a0c3d57b8755c477ae372f5b7177b900b6144e597a993852aef4c85" Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.305695 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2b0d-account-create-update-rh8s6" Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.315632 4730 scope.go:117] "RemoveContainer" containerID="786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765" Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.315698 4730 scope.go:117] "RemoveContainer" containerID="2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b" Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.315794 4730 scope.go:117] "RemoveContainer" containerID="97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326" Jan 31 16:45:44 crc kubenswrapper[4730]: E0131 16:45:44.316213 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.317373 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dce-account-create-update-2crhq" event={"ID":"6b163a61-8109-4989-ada6-8e408c05448d","Type":"ContainerDied","Data":"326c410925fa0e8175cfa9e5c9615eb92393e8dfe033ca21285003a9569de13a"} Jan 31 16:45:44 crc 
kubenswrapper[4730]: I0131 16:45:44.317421 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="326c410925fa0e8175cfa9e5c9615eb92393e8dfe033ca21285003a9569de13a" Jan 31 16:45:44 crc kubenswrapper[4730]: I0131 16:45:44.317447 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1dce-account-create-update-2crhq" Jan 31 16:45:45 crc kubenswrapper[4730]: I0131 16:45:45.553218 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66hvq" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" probeResult="failure" output=< Jan 31 16:45:45 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:45:45 crc kubenswrapper[4730]: > Jan 31 16:45:46 crc kubenswrapper[4730]: I0131 16:45:46.923592 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:46 crc kubenswrapper[4730]: I0131 16:45:46.950078 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:46 crc kubenswrapper[4730]: I0131 16:45:46.955390 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:46 crc kubenswrapper[4730]: I0131 16:45:46.967570 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:46 crc kubenswrapper[4730]: I0131 16:45:46.974073 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007055 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892cfb30-014c-4cdf-8822-dbcbe7dea46c-operator-scripts\") pod \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007404 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwwb6\" (UniqueName: \"kubernetes.io/projected/48dba275-7242-434b-b55e-1c62a25c7c1a-kube-api-access-lwwb6\") pod \"48dba275-7242-434b-b55e-1c62a25c7c1a\" (UID: \"48dba275-7242-434b-b55e-1c62a25c7c1a\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007440 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhn9g\" (UniqueName: \"kubernetes.io/projected/d9f2bffc-75d1-4da3-be48-728edaf3e0be-kube-api-access-bhn9g\") pod \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007486 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9f2bffc-75d1-4da3-be48-728edaf3e0be-operator-scripts\") pod \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\" (UID: \"d9f2bffc-75d1-4da3-be48-728edaf3e0be\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007508 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48dba275-7242-434b-b55e-1c62a25c7c1a-operator-scripts\") pod \"48dba275-7242-434b-b55e-1c62a25c7c1a\" (UID: 
\"48dba275-7242-434b-b55e-1c62a25c7c1a\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007536 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpqqf\" (UniqueName: \"kubernetes.io/projected/c8fb99c8-b28a-450a-8692-e585216fbc53-kube-api-access-dpqqf\") pod \"c8fb99c8-b28a-450a-8692-e585216fbc53\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007566 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvjnl\" (UniqueName: \"kubernetes.io/projected/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-kube-api-access-pvjnl\") pod \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007581 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8fb99c8-b28a-450a-8692-e585216fbc53-operator-scripts\") pod \"c8fb99c8-b28a-450a-8692-e585216fbc53\" (UID: \"c8fb99c8-b28a-450a-8692-e585216fbc53\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007598 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-operator-scripts\") pod \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\" (UID: \"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.007621 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l2d8\" (UniqueName: \"kubernetes.io/projected/892cfb30-014c-4cdf-8822-dbcbe7dea46c-kube-api-access-8l2d8\") pod \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\" (UID: \"892cfb30-014c-4cdf-8822-dbcbe7dea46c\") " Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.008756 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48dba275-7242-434b-b55e-1c62a25c7c1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48dba275-7242-434b-b55e-1c62a25c7c1a" (UID: "48dba275-7242-434b-b55e-1c62a25c7c1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.008756 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892cfb30-014c-4cdf-8822-dbcbe7dea46c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "892cfb30-014c-4cdf-8822-dbcbe7dea46c" (UID: "892cfb30-014c-4cdf-8822-dbcbe7dea46c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.009120 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f2bffc-75d1-4da3-be48-728edaf3e0be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9f2bffc-75d1-4da3-be48-728edaf3e0be" (UID: "d9f2bffc-75d1-4da3-be48-728edaf3e0be"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.009387 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c3c71a4-f2bf-46da-9c7d-c7c4dba19585" (UID: "1c3c71a4-f2bf-46da-9c7d-c7c4dba19585"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.009406 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8fb99c8-b28a-450a-8692-e585216fbc53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8fb99c8-b28a-450a-8692-e585216fbc53" (UID: "c8fb99c8-b28a-450a-8692-e585216fbc53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.012554 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892cfb30-014c-4cdf-8822-dbcbe7dea46c-kube-api-access-8l2d8" (OuterVolumeSpecName: "kube-api-access-8l2d8") pod "892cfb30-014c-4cdf-8822-dbcbe7dea46c" (UID: "892cfb30-014c-4cdf-8822-dbcbe7dea46c"). InnerVolumeSpecName "kube-api-access-8l2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.020815 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-kube-api-access-pvjnl" (OuterVolumeSpecName: "kube-api-access-pvjnl") pod "1c3c71a4-f2bf-46da-9c7d-c7c4dba19585" (UID: "1c3c71a4-f2bf-46da-9c7d-c7c4dba19585"). InnerVolumeSpecName "kube-api-access-pvjnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.025185 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f2bffc-75d1-4da3-be48-728edaf3e0be-kube-api-access-bhn9g" (OuterVolumeSpecName: "kube-api-access-bhn9g") pod "d9f2bffc-75d1-4da3-be48-728edaf3e0be" (UID: "d9f2bffc-75d1-4da3-be48-728edaf3e0be"). InnerVolumeSpecName "kube-api-access-bhn9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.025291 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48dba275-7242-434b-b55e-1c62a25c7c1a-kube-api-access-lwwb6" (OuterVolumeSpecName: "kube-api-access-lwwb6") pod "48dba275-7242-434b-b55e-1c62a25c7c1a" (UID: "48dba275-7242-434b-b55e-1c62a25c7c1a"). InnerVolumeSpecName "kube-api-access-lwwb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.028101 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8fb99c8-b28a-450a-8692-e585216fbc53-kube-api-access-dpqqf" (OuterVolumeSpecName: "kube-api-access-dpqqf") pod "c8fb99c8-b28a-450a-8692-e585216fbc53" (UID: "c8fb99c8-b28a-450a-8692-e585216fbc53"). InnerVolumeSpecName "kube-api-access-dpqqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108696 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l2d8\" (UniqueName: \"kubernetes.io/projected/892cfb30-014c-4cdf-8822-dbcbe7dea46c-kube-api-access-8l2d8\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108723 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892cfb30-014c-4cdf-8822-dbcbe7dea46c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108735 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwwb6\" (UniqueName: \"kubernetes.io/projected/48dba275-7242-434b-b55e-1c62a25c7c1a-kube-api-access-lwwb6\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108743 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhn9g\" (UniqueName: \"kubernetes.io/projected/d9f2bffc-75d1-4da3-be48-728edaf3e0be-kube-api-access-bhn9g\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108753 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9f2bffc-75d1-4da3-be48-728edaf3e0be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108761 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48dba275-7242-434b-b55e-1c62a25c7c1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108770 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpqqf\" (UniqueName: \"kubernetes.io/projected/c8fb99c8-b28a-450a-8692-e585216fbc53-kube-api-access-dpqqf\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108779 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvjnl\" (UniqueName: \"kubernetes.io/projected/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-kube-api-access-pvjnl\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108787 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8fb99c8-b28a-450a-8692-e585216fbc53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.108794 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.351054 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ltdm6" event={"ID":"1c3c71a4-f2bf-46da-9c7d-c7c4dba19585","Type":"ContainerDied","Data":"12ab995c21289628ff26857609f36d8714c7b048a54f3a0e98e925923ab0d9ec"} Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.352917 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12ab995c21289628ff26857609f36d8714c7b048a54f3a0e98e925923ab0d9ec" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.353036 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ltdm6" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.355125 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l4plp" event={"ID":"d9f2bffc-75d1-4da3-be48-728edaf3e0be","Type":"ContainerDied","Data":"245ae040189ca63e4bbc357a2595b4942b15b2b629204571446a5cbf1be52cc7"} Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.355162 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="245ae040189ca63e4bbc357a2595b4942b15b2b629204571446a5cbf1be52cc7" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.355253 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l4plp" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.358717 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v46sw" event={"ID":"48dba275-7242-434b-b55e-1c62a25c7c1a","Type":"ContainerDied","Data":"d5b7fbf0d31c0bddd3b98f6882177c6cd68bd51e39b8ff222bd603869412835c"} Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.358780 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b7fbf0d31c0bddd3b98f6882177c6cd68bd51e39b8ff222bd603869412835c" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.358924 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v46sw" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.365433 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-547f-account-create-update-4pbzk" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.365982 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-547f-account-create-update-4pbzk" event={"ID":"892cfb30-014c-4cdf-8822-dbcbe7dea46c","Type":"ContainerDied","Data":"5f5801f9608edc9a6263a52a093fd8185a628c8b22c6d77b43f059ff4a5a3707"} Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.366118 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f5801f9608edc9a6263a52a093fd8185a628c8b22c6d77b43f059ff4a5a3707" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.368132 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lv6bn" event={"ID":"c8fb99c8-b28a-450a-8692-e585216fbc53","Type":"ContainerDied","Data":"348d8b26136a20110c1fcda6250b8af0bb98d1c467d7d177a22a58bdc604615e"} Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.368164 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348d8b26136a20110c1fcda6250b8af0bb98d1c467d7d177a22a58bdc604615e" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.368231 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lv6bn" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.376712 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dcfm" event={"ID":"97354e1f-e4b3-4f45-a9f6-58d1932e9f45","Type":"ContainerStarted","Data":"4e6f6b95da70e5c197514d2d0a23e4491b78add6b0e8c9997c68fca337e92683"} Jan 31 16:45:47 crc kubenswrapper[4730]: E0131 16:45:47.445230 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c3c71a4_f2bf_46da_9c7d_c7c4dba19585.slice/crio-12ab995c21289628ff26857609f36d8714c7b048a54f3a0e98e925923ab0d9ec\": RecentStats: unable to find data in memory cache]" Jan 31 16:45:47 crc kubenswrapper[4730]: I0131 16:45:47.958084 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4dcfm" podStartSLOduration=8.843824142999999 podStartE2EDuration="13.958064847s" podCreationTimestamp="2026-01-31 16:45:34 +0000 UTC" firstStartedPulling="2026-01-31 16:45:41.602391532 +0000 UTC m=+928.408448448" lastFinishedPulling="2026-01-31 16:45:46.716632236 +0000 UTC m=+933.522689152" observedRunningTime="2026-01-31 16:45:47.402981365 +0000 UTC m=+934.209038301" watchObservedRunningTime="2026-01-31 16:45:47.958064847 +0000 UTC m=+934.764121763" Jan 31 16:45:50 crc kubenswrapper[4730]: I0131 16:45:50.410520 4730 generic.go:334] "Generic (PLEG): container finished" podID="627cf9cc-1e11-455d-b186-f159d4eed39c" containerID="4f07dcc150b2023774fde7bb4915ca967d8b8644c88104b9125b3fac66c92813" exitCode=0 Jan 31 16:45:50 crc kubenswrapper[4730]: I0131 16:45:50.410588 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vxrp" event={"ID":"627cf9cc-1e11-455d-b186-f159d4eed39c","Type":"ContainerDied","Data":"4f07dcc150b2023774fde7bb4915ca967d8b8644c88104b9125b3fac66c92813"} Jan 31 16:45:51 crc kubenswrapper[4730]: I0131 16:45:51.425668 4730 generic.go:334] "Generic (PLEG): container finished" podID="97354e1f-e4b3-4f45-a9f6-58d1932e9f45" containerID="4e6f6b95da70e5c197514d2d0a23e4491b78add6b0e8c9997c68fca337e92683" exitCode=0 Jan 31 16:45:51 crc kubenswrapper[4730]: I0131 16:45:51.425784 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dcfm" event={"ID":"97354e1f-e4b3-4f45-a9f6-58d1932e9f45","Type":"ContainerDied","Data":"4e6f6b95da70e5c197514d2d0a23e4491b78add6b0e8c9997c68fca337e92683"} Jan 31 16:45:51 crc kubenswrapper[4730]: I0131 16:45:51.895759 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.015543 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-db-sync-config-data\") pod \"627cf9cc-1e11-455d-b186-f159d4eed39c\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.015608 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-combined-ca-bundle\") pod \"627cf9cc-1e11-455d-b186-f159d4eed39c\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.015636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-config-data\") pod \"627cf9cc-1e11-455d-b186-f159d4eed39c\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.016252 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmpp5\" (UniqueName: \"kubernetes.io/projected/627cf9cc-1e11-455d-b186-f159d4eed39c-kube-api-access-tmpp5\") pod \"627cf9cc-1e11-455d-b186-f159d4eed39c\" (UID: \"627cf9cc-1e11-455d-b186-f159d4eed39c\") " Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.021363 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "627cf9cc-1e11-455d-b186-f159d4eed39c" (UID: "627cf9cc-1e11-455d-b186-f159d4eed39c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.036034 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/627cf9cc-1e11-455d-b186-f159d4eed39c-kube-api-access-tmpp5" (OuterVolumeSpecName: "kube-api-access-tmpp5") pod "627cf9cc-1e11-455d-b186-f159d4eed39c" (UID: "627cf9cc-1e11-455d-b186-f159d4eed39c"). InnerVolumeSpecName "kube-api-access-tmpp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.060719 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "627cf9cc-1e11-455d-b186-f159d4eed39c" (UID: "627cf9cc-1e11-455d-b186-f159d4eed39c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.065510 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-config-data" (OuterVolumeSpecName: "config-data") pod "627cf9cc-1e11-455d-b186-f159d4eed39c" (UID: "627cf9cc-1e11-455d-b186-f159d4eed39c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.118124 4730 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.118241 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.118304 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627cf9cc-1e11-455d-b186-f159d4eed39c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.118366 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmpp5\" (UniqueName: \"kubernetes.io/projected/627cf9cc-1e11-455d-b186-f159d4eed39c-kube-api-access-tmpp5\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.434031 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5vxrp" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.434972 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vxrp" event={"ID":"627cf9cc-1e11-455d-b186-f159d4eed39c","Type":"ContainerDied","Data":"da08308eb3541103a04531ab5cec124e93d64d789abf08c7735f189f28ac38a5"} Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.435033 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da08308eb3541103a04531ab5cec124e93d64d789abf08c7735f189f28ac38a5" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.816602 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.835527 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-config-data\") pod \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.897938 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-config-data" (OuterVolumeSpecName: "config-data") pod "97354e1f-e4b3-4f45-a9f6-58d1932e9f45" (UID: "97354e1f-e4b3-4f45-a9f6-58d1932e9f45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.911716 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-fc2xz"] Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912033 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3c71a4-f2bf-46da-9c7d-c7c4dba19585" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912048 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3c71a4-f2bf-46da-9c7d-c7c4dba19585" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912060 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f2bffc-75d1-4da3-be48-728edaf3e0be" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912067 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f2bffc-75d1-4da3-be48-728edaf3e0be" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912076 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97354e1f-e4b3-4f45-a9f6-58d1932e9f45" containerName="keystone-db-sync" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912082 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="97354e1f-e4b3-4f45-a9f6-58d1932e9f45" containerName="keystone-db-sync" Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912089 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48dba275-7242-434b-b55e-1c62a25c7c1a" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912094 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="48dba275-7242-434b-b55e-1c62a25c7c1a" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912100 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8fb99c8-b28a-450a-8692-e585216fbc53" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912106 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8fb99c8-b28a-450a-8692-e585216fbc53" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912120 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2111311f-b72a-4c59-84a4-4c97bfa06105" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912125 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2111311f-b72a-4c59-84a4-4c97bfa06105" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912147 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b163a61-8109-4989-ada6-8e408c05448d" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912154 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b163a61-8109-4989-ada6-8e408c05448d" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912164 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892cfb30-014c-4cdf-8822-dbcbe7dea46c" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912170 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="892cfb30-014c-4cdf-8822-dbcbe7dea46c" containerName="mariadb-account-create-update" 
Jan 31 16:45:52 crc kubenswrapper[4730]: E0131 16:45:52.912178 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627cf9cc-1e11-455d-b186-f159d4eed39c" containerName="glance-db-sync" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912184 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="627cf9cc-1e11-455d-b186-f159d4eed39c" containerName="glance-db-sync" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912322 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="627cf9cc-1e11-455d-b186-f159d4eed39c" containerName="glance-db-sync" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912337 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2111311f-b72a-4c59-84a4-4c97bfa06105" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912345 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f2bffc-75d1-4da3-be48-728edaf3e0be" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912355 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="892cfb30-014c-4cdf-8822-dbcbe7dea46c" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912367 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8fb99c8-b28a-450a-8692-e585216fbc53" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912376 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="48dba275-7242-434b-b55e-1c62a25c7c1a" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912386 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b163a61-8109-4989-ada6-8e408c05448d" containerName="mariadb-account-create-update" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912395 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3c71a4-f2bf-46da-9c7d-c7c4dba19585" containerName="mariadb-database-create" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.912402 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="97354e1f-e4b3-4f45-a9f6-58d1932e9f45" containerName="keystone-db-sync" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.913141 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.939386 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s96p\" (UniqueName: \"kubernetes.io/projected/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-kube-api-access-5s96p\") pod \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.939429 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-combined-ca-bundle\") pod \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\" (UID: \"97354e1f-e4b3-4f45-a9f6-58d1932e9f45\") " Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.939755 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.954499 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-kube-api-access-5s96p" (OuterVolumeSpecName: "kube-api-access-5s96p") pod "97354e1f-e4b3-4f45-a9f6-58d1932e9f45" (UID: "97354e1f-e4b3-4f45-a9f6-58d1932e9f45"). InnerVolumeSpecName "kube-api-access-5s96p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:52 crc kubenswrapper[4730]: I0131 16:45:52.957670 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-fc2xz"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.001041 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97354e1f-e4b3-4f45-a9f6-58d1932e9f45" (UID: "97354e1f-e4b3-4f45-a9f6-58d1932e9f45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.041600 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.041645 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.041675 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.041712 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppcmt\" (UniqueName: \"kubernetes.io/projected/b455578a-dbb5-4775-acb1-02640d25619c-kube-api-access-ppcmt\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.041751 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-config\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.041792 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s96p\" (UniqueName: \"kubernetes.io/projected/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-kube-api-access-5s96p\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.041818 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97354e1f-e4b3-4f45-a9f6-58d1932e9f45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.143643 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.143699 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.143742 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ppcmt\" (UniqueName: \"kubernetes.io/projected/b455578a-dbb5-4775-acb1-02640d25619c-kube-api-access-ppcmt\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.143792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-config\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.143879 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.145100 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-config\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.145152 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.145261 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.145968 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.163140 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppcmt\" (UniqueName: \"kubernetes.io/projected/b455578a-dbb5-4775-acb1-02640d25619c-kube-api-access-ppcmt\") pod \"dnsmasq-dns-5b946c75cc-fc2xz\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.307369 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.443850 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4dcfm" event={"ID":"97354e1f-e4b3-4f45-a9f6-58d1932e9f45","Type":"ContainerDied","Data":"a70290e67b17609f1283b739183b4f34d9f8bb657295e289e09bc4f1ec3a7675"} Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.443897 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70290e67b17609f1283b739183b4f34d9f8bb657295e289e09bc4f1ec3a7675" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.443910 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4dcfm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.642784 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5lwz6"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.643689 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.653207 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.653309 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.653372 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.653463 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n4fjp" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.653509 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.654873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znc9p\" (UniqueName: \"kubernetes.io/projected/bc0c867e-0453-4770-889a-6d7c6ed361da-kube-api-access-znc9p\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.654919 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-config-data\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.654972 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-fernet-keys\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.655060 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-combined-ca-bundle\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc 
kubenswrapper[4730]: I0131 16:45:53.655084 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-credential-keys\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.655103 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-scripts\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.664532 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5lwz6"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.676622 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-fc2xz"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.721822 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784f69c749-8tbtm"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.723040 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756337 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hz9x\" (UniqueName: \"kubernetes.io/projected/a613cf58-5b4f-4444-89b6-9c8cd68325b0-kube-api-access-7hz9x\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756655 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znc9p\" (UniqueName: \"kubernetes.io/projected/bc0c867e-0453-4770-889a-6d7c6ed361da-kube-api-access-znc9p\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756686 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-config-data\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756714 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756747 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-fernet-keys\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756789 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-config\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756840 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756866 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-dns-svc\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756889 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-combined-ca-bundle\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756908 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-credential-keys\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.756927 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-scripts\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.763465 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-config-data\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.769440 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-scripts\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.769458 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-combined-ca-bundle\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.771047 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-credential-keys\") pod \"keystone-bootstrap-5lwz6\" 
(UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.777916 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-fernet-keys\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.792502 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-8tbtm"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.820088 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znc9p\" (UniqueName: \"kubernetes.io/projected/bc0c867e-0453-4770-889a-6d7c6ed361da-kube-api-access-znc9p\") pod \"keystone-bootstrap-5lwz6\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.857582 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-config\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.857638 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.857663 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-dns-svc\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.857701 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hz9x\" (UniqueName: \"kubernetes.io/projected/a613cf58-5b4f-4444-89b6-9c8cd68325b0-kube-api-access-7hz9x\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.857742 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.858688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-config\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.859173 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-nb\") pod 
\"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.859622 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-dns-svc\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.859986 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.903044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hz9x\" (UniqueName: \"kubernetes.io/projected/a613cf58-5b4f-4444-89b6-9c8cd68325b0-kube-api-access-7hz9x\") pod \"dnsmasq-dns-784f69c749-8tbtm\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.914342 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-fc2xz"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.975146 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.977369 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rw222"] Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.978325 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.987229 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.987563 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 16:45:53 crc kubenswrapper[4730]: I0131 16:45:53.987868 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2bx94" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.062684 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc429\" (UniqueName: \"kubernetes.io/projected/7cf9dbf3-9160-439f-96d0-4437019ae012-kube-api-access-bc429\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.062756 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-config\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.063068 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-combined-ca-bundle\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.063084 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.163852 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc429\" (UniqueName: \"kubernetes.io/projected/7cf9dbf3-9160-439f-96d0-4437019ae012-kube-api-access-bc429\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.163935 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-config\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.163998 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-combined-ca-bundle\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.173488 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-combined-ca-bundle\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.181853 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-config\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.209721 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc429\" (UniqueName: \"kubernetes.io/projected/7cf9dbf3-9160-439f-96d0-4437019ae012-kube-api-access-bc429\") pod \"neutron-db-sync-rw222\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.302518 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rw222"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.341362 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67744bc4b5-tg4xw"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.342642 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.364467 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xgxsd" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.364691 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.365090 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.371293 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.372679 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f143d45a-857a-4114-99eb-e1880e44ffbe-logs\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.372721 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6xj7\" (UniqueName: \"kubernetes.io/projected/f143d45a-857a-4114-99eb-e1880e44ffbe-kube-api-access-l6xj7\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.372749 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-scripts\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.372776 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f143d45a-857a-4114-99eb-e1880e44ffbe-horizon-secret-key\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.372818 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-config-data\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.377322 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67744bc4b5-tg4xw"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.410152 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rw222" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.444675 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.457883 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.476036 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f143d45a-857a-4114-99eb-e1880e44ffbe-logs\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.476091 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6xj7\" (UniqueName: \"kubernetes.io/projected/f143d45a-857a-4114-99eb-e1880e44ffbe-kube-api-access-l6xj7\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.476118 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-scripts\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.476145 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f143d45a-857a-4114-99eb-e1880e44ffbe-horizon-secret-key\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.476169 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-config-data\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.477781 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-config-data\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.484079 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f143d45a-857a-4114-99eb-e1880e44ffbe-logs\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.492363 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f143d45a-857a-4114-99eb-e1880e44ffbe-horizon-secret-key\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.493968 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-scripts\") pod \"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.508190 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" event={"ID":"b455578a-dbb5-4775-acb1-02640d25619c","Type":"ContainerStarted","Data":"f248cb14b2cea773b47eaef70804b517a90526fbaf2160271e7e5d0a4075898a"} Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.525243 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.525442 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.561081 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.580651 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zjb\" (UniqueName: \"kubernetes.io/projected/f0d3583d-f56f-4f4b-87cb-e748976d47f6-kube-api-access-87zjb\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.581050 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-config-data\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.581093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-scripts\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.581159 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.581181 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-run-httpd\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.581234 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-log-httpd\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.581286 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.638226 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6xj7\" (UniqueName: \"kubernetes.io/projected/f143d45a-857a-4114-99eb-e1880e44ffbe-kube-api-access-l6xj7\") pod 
\"horizon-67744bc4b5-tg4xw\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.654021 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xfklz"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.655060 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.665021 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qpskq"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.666618 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.667986 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.682375 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-scripts\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.682449 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.682469 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-run-httpd\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.682558 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-log-httpd\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.682592 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.682612 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87zjb\" (UniqueName: \"kubernetes.io/projected/f0d3583d-f56f-4f4b-87cb-e748976d47f6-kube-api-access-87zjb\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.682643 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-config-data\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.685494 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-run-httpd\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.696747 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.696966 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hdlj2" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.697173 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.697531 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-log-httpd\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.702035 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-scripts\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.705518 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.706161 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.708486 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.717293 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5dw9r" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.717392 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.717597 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.750600 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-config-data\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.751078 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xfklz"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.767647 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-8tbtm"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.774782 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qpskq"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.785775 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zjb\" (UniqueName: \"kubernetes.io/projected/f0d3583d-f56f-4f4b-87cb-e748976d47f6-kube-api-access-87zjb\") pod \"ceilometer-0\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788686 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-config-data\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788750 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-combined-ca-bundle\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788772 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53655839-53b2-46cb-b859-fdb3376bc398-etc-machine-id\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788825 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-config-data\") 
pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788889 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-combined-ca-bundle\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788921 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-scripts\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788942 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1243bfc-8196-4501-9b35-89e359501a00-logs\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788962 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-scripts\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.788991 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-db-sync-config-data\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.789026 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwp7v\" (UniqueName: \"kubernetes.io/projected/f1243bfc-8196-4501-9b35-89e359501a00-kube-api-access-wwp7v\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.789049 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jbd\" (UniqueName: \"kubernetes.io/projected/53655839-53b2-46cb-b859-fdb3376bc398-kube-api-access-98jbd\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.823187 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.864272 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890757 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-config-data\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890837 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-combined-ca-bundle\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890856 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53655839-53b2-46cb-b859-fdb3376bc398-etc-machine-id\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890882 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-config-data\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890926 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-combined-ca-bundle\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890947 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-scripts\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890963 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1243bfc-8196-4501-9b35-89e359501a00-logs\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.890977 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-scripts\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.891000 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-db-sync-config-data\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " 
pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.891028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwp7v\" (UniqueName: \"kubernetes.io/projected/f1243bfc-8196-4501-9b35-89e359501a00-kube-api-access-wwp7v\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.891047 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jbd\" (UniqueName: \"kubernetes.io/projected/53655839-53b2-46cb-b859-fdb3376bc398-kube-api-access-98jbd\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.906451 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-scripts\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.908264 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1243bfc-8196-4501-9b35-89e359501a00-logs\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.908567 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53655839-53b2-46cb-b859-fdb3376bc398-etc-machine-id\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.915365 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-combined-ca-bundle\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.922918 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-config-data\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.928386 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-config-data\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.934277 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-db-sync-config-data\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.939410 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-combined-ca-bundle\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.939917 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.941172 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.949289 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-scripts\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.957440 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jbd\" (UniqueName: \"kubernetes.io/projected/53655839-53b2-46cb-b859-fdb3376bc398-kube-api-access-98jbd\") pod \"cinder-db-sync-xfklz\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: W0131 16:45:54.973419 4730 reflector.go:561] object-"openstack"/"glance-default-external-config-data": failed to list *v1.Secret: secrets "glance-default-external-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 31 16:45:54 crc kubenswrapper[4730]: E0131 16:45:54.973462 4730 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-external-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-default-external-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.973543 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-w5ds8" Jan 31 16:45:54 crc kubenswrapper[4730]: W0131 16:45:54.973721 4730 reflector.go:561] object-"openstack"/"glance-scripts": failed to list *v1.Secret: secrets "glance-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 31 16:45:54 crc kubenswrapper[4730]: E0131 16:45:54.973734 4730 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.986167 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xfklz" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.990179 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-p65js"] Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.991580 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:54 crc kubenswrapper[4730]: I0131 16:45:54.992587 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwp7v\" (UniqueName: \"kubernetes.io/projected/f1243bfc-8196-4501-9b35-89e359501a00-kube-api-access-wwp7v\") pod \"placement-db-sync-qpskq\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.003648 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qpskq" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.140819 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wkj2z"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.155233 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-logs\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171132 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171273 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-scripts\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171302 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171409 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqcx\" (UniqueName: \"kubernetes.io/projected/fda90ccb-0cf0-45d3-88fd-c795848c9482-kube-api-access-hdqcx\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171435 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-dns-svc\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171525 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " 
pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171654 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv7sg\" (UniqueName: \"kubernetes.io/projected/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-kube-api-access-hv7sg\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171693 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171727 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171763 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-config\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.171960 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.220581 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.222853 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wkj2z"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.238395 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.246138 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nggww" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqcx\" (UniqueName: \"kubernetes.io/projected/fda90ccb-0cf0-45d3-88fd-c795848c9482-kube-api-access-hdqcx\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284263 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-dns-svc\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284307 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-combined-ca-bundle\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284350 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284382 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv7sg\" (UniqueName: \"kubernetes.io/projected/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-kube-api-access-hv7sg\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284419 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284441 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-db-sync-config-data\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284461 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284485 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-config\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284525 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284601 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-logs\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284634 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5mqd\" (UniqueName: \"kubernetes.io/projected/2fd279f9-efa4-4fb3-a6e0-655de1c20403-kube-api-access-x5mqd\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284694 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-scripts\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284718 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.284734 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.285732 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.290599 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-p65js\" 
(UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.292315 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.301424 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-config\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.313470 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-dns-svc\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.316546 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.316605 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69df784bcc-98p6s"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.316741 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-logs\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.321391 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.368590 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv7sg\" (UniqueName: \"kubernetes.io/projected/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-kube-api-access-hv7sg\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.369239 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqcx\" (UniqueName: \"kubernetes.io/projected/fda90ccb-0cf0-45d3-88fd-c795848c9482-kube-api-access-hdqcx\") pod \"dnsmasq-dns-f84976bdf-p65js\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.418887 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423205 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-combined-ca-bundle\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-db-sync-config-data\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423379 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-scripts\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423407 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-config-data\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423469 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00791e2a-6f2b-450d-acab-1ac4b91656ea-logs\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423513 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72457\" (UniqueName: \"kubernetes.io/projected/00791e2a-6f2b-450d-acab-1ac4b91656ea-kube-api-access-72457\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423549 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5mqd\" (UniqueName: 
\"kubernetes.io/projected/2fd279f9-efa4-4fb3-a6e0-655de1c20403-kube-api-access-x5mqd\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.423594 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00791e2a-6f2b-450d-acab-1ac4b91656ea-horizon-secret-key\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.437050 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-db-sync-config-data\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.439587 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.440018 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.452143 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-combined-ca-bundle\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.468345 4730 scope.go:117] "RemoveContainer" containerID="786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.473835 4730 scope.go:117] "RemoveContainer" containerID="2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.473938 4730 scope.go:117] "RemoveContainer" containerID="97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326" Jan 31 16:45:55 crc kubenswrapper[4730]: E0131 16:45:55.474456 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.480089 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.480856 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5mqd\" (UniqueName: \"kubernetes.io/projected/2fd279f9-efa4-4fb3-a6e0-655de1c20403-kube-api-access-x5mqd\") pod \"barbican-db-sync-wkj2z\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.523062 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lwz6" event={"ID":"bc0c867e-0453-4770-889a-6d7c6ed361da","Type":"ContainerStarted","Data":"0010985fc8a1412a64867c98c544820e05d3768140eade089ec4251dc58e3ad1"} Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.524721 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-scripts\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.524748 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-config-data\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.524788 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00791e2a-6f2b-450d-acab-1ac4b91656ea-logs\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.524833 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72457\" (UniqueName: \"kubernetes.io/projected/00791e2a-6f2b-450d-acab-1ac4b91656ea-kube-api-access-72457\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.524915 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00791e2a-6f2b-450d-acab-1ac4b91656ea-horizon-secret-key\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.527291 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00791e2a-6f2b-450d-acab-1ac4b91656ea-logs\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.527792 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-scripts\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.528715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-config-data\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.529477 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-8tbtm"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.532656 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-8tbtm" event={"ID":"a613cf58-5b4f-4444-89b6-9c8cd68325b0","Type":"ContainerStarted","Data":"2ec0978467dfd4b8f1b8a2b0e0886cc03d53aaa55dec21c88ef8aa8b7ae04094"} Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.539236 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-p65js"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.547173 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72457\" (UniqueName: \"kubernetes.io/projected/00791e2a-6f2b-450d-acab-1ac4b91656ea-kube-api-access-72457\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.547235 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69df784bcc-98p6s"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.548201 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00791e2a-6f2b-450d-acab-1ac4b91656ea-horizon-secret-key\") pod \"horizon-69df784bcc-98p6s\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.590835 4730 generic.go:334] "Generic (PLEG): container finished" podID="b455578a-dbb5-4775-acb1-02640d25619c" containerID="30fc3510d13872c330ee42271aac40a4929d1eb0e6694a6cabfcb0021d893ed1" exitCode=0 Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.591909 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" event={"ID":"b455578a-dbb5-4775-acb1-02640d25619c","Type":"ContainerDied","Data":"30fc3510d13872c330ee42271aac40a4929d1eb0e6694a6cabfcb0021d893ed1"} Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.594299 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.610378 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.620794 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.625422 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.706590 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:45:55 crc kubenswrapper[4730]: E0131 16:45:55.707464 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="77765e97-6296-4f5f-83f0-9ff3ff09b5f2" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.712668 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.728841 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.728929 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.728952 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.728982 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.729003 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhktw\" (UniqueName: \"kubernetes.io/projected/1227fd84-b580-4bc1-84eb-2f90802a4a3d-kube-api-access-qhktw\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.729090 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.729109 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.729727 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.751223 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5lwz6"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.791868 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66hvq"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.816046 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.825512 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rw222"] Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.831018 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.831065 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.831103 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.831125 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhktw\" (UniqueName: \"kubernetes.io/projected/1227fd84-b580-4bc1-84eb-2f90802a4a3d-kube-api-access-qhktw\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.831215 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.831240 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.831282 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-scripts\") 
pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.834547 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.835048 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.837961 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-logs\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.861167 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-scripts\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.862458 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.864038 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.865085 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.868441 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhktw\" (UniqueName: \"kubernetes.io/projected/1227fd84-b580-4bc1-84eb-2f90802a4a3d-kube-api-access-qhktw\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.916933 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc 
kubenswrapper[4730]: I0131 16:45:55.984774 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:45:55 crc kubenswrapper[4730]: I0131 16:45:55.999536 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67744bc4b5-tg4xw"] Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.030198 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:45:56 crc kubenswrapper[4730]: E0131 16:45:56.317228 4730 secret.go:188] Couldn't get secret openstack/glance-default-external-config-data: failed to sync secret cache: timed out waiting for the condition Jan 31 16:45:56 crc kubenswrapper[4730]: E0131 16:45:56.317569 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data podName:77765e97-6296-4f5f-83f0-9ff3ff09b5f2 nodeName:}" failed. No retries permitted until 2026-01-31 16:45:56.817546765 +0000 UTC m=+943.623603681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data") pod "glance-default-external-api-0" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2") : failed to sync secret cache: timed out waiting for the condition Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.352192 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.386171 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qpskq"] Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.392966 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xfklz"] Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.591236 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.604028 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-p65js"] Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.652606 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lwz6" event={"ID":"bc0c867e-0453-4770-889a-6d7c6ed361da","Type":"ContainerStarted","Data":"8ab434ed6c460a0441f280ba8e6c81a3b4d8478e9ee9f29f20f740e872a262ef"} Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.688935 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rw222" event={"ID":"7cf9dbf3-9160-439f-96d0-4437019ae012","Type":"ContainerStarted","Data":"895cab6b16eb7a353f8c1bee26fe81294ee5929f5fd129be54f1b3481abf3bd9"} Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.688973 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rw222" event={"ID":"7cf9dbf3-9160-439f-96d0-4437019ae012","Type":"ContainerStarted","Data":"d2956a9184bafb91af198d2d2f3b5b260ed714368c8b9f9f4cedd8d001b68b25"} Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.699845 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-config\") pod \"b455578a-dbb5-4775-acb1-02640d25619c\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.699916 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppcmt\" (UniqueName: \"kubernetes.io/projected/b455578a-dbb5-4775-acb1-02640d25619c-kube-api-access-ppcmt\") pod \"b455578a-dbb5-4775-acb1-02640d25619c\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.699947 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-dns-svc\") pod \"b455578a-dbb5-4775-acb1-02640d25619c\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.700062 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-sb\") pod \"b455578a-dbb5-4775-acb1-02640d25619c\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.700142 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-nb\") pod \"b455578a-dbb5-4775-acb1-02640d25619c\" (UID: \"b455578a-dbb5-4775-acb1-02640d25619c\") " Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.711294 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67744bc4b5-tg4xw" event={"ID":"f143d45a-857a-4114-99eb-e1880e44ffbe","Type":"ContainerStarted","Data":"75a1b07196569ab4d3954ebcf2e5c5a329c85020103478824b430808a889e157"} Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.737138 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b455578a-dbb5-4775-acb1-02640d25619c-kube-api-access-ppcmt" (OuterVolumeSpecName: "kube-api-access-ppcmt") pod "b455578a-dbb5-4775-acb1-02640d25619c" (UID: 
"b455578a-dbb5-4775-acb1-02640d25619c"). InnerVolumeSpecName "kube-api-access-ppcmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.776059 4730 generic.go:334] "Generic (PLEG): container finished" podID="a613cf58-5b4f-4444-89b6-9c8cd68325b0" containerID="ababe122c802334635f196d2267768bc3dbb5513f71b72e1279e32ec8015e785" exitCode=0 Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.776138 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-8tbtm" event={"ID":"a613cf58-5b4f-4444-89b6-9c8cd68325b0","Type":"ContainerDied","Data":"ababe122c802334635f196d2267768bc3dbb5513f71b72e1279e32ec8015e785"} Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.793976 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b455578a-dbb5-4775-acb1-02640d25619c" (UID: "b455578a-dbb5-4775-acb1-02640d25619c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.813878 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppcmt\" (UniqueName: \"kubernetes.io/projected/b455578a-dbb5-4775-acb1-02640d25619c-kube-api-access-ppcmt\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.813910 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.834076 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b455578a-dbb5-4775-acb1-02640d25619c" (UID: "b455578a-dbb5-4775-acb1-02640d25619c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.848545 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qpskq" event={"ID":"f1243bfc-8196-4501-9b35-89e359501a00","Type":"ContainerStarted","Data":"208633f4a6198468e989011d6d5db4d3af1ff561f21eccd315f154682adc436d"} Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.860101 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-config" (OuterVolumeSpecName: "config") pod "b455578a-dbb5-4775-acb1-02640d25619c" (UID: "b455578a-dbb5-4775-acb1-02640d25619c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.880368 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b455578a-dbb5-4775-acb1-02640d25619c" (UID: "b455578a-dbb5-4775-acb1-02640d25619c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.883631 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rw222" podStartSLOduration=3.88361096 podStartE2EDuration="3.88361096s" podCreationTimestamp="2026-01-31 16:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:56.818533731 +0000 UTC m=+943.624590647" watchObservedRunningTime="2026-01-31 16:45:56.88361096 +0000 UTC m=+943.689667876" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.905985 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5lwz6" podStartSLOduration=3.905952406 podStartE2EDuration="3.905952406s" podCreationTimestamp="2026-01-31 16:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:56.745362534 +0000 UTC m=+943.551419450" watchObservedRunningTime="2026-01-31 16:45:56.905952406 +0000 UTC m=+943.712009322" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.965832 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerStarted","Data":"c0574d423338aeba52c57796ec24f2aee86ea7ca73766688662e346ebbf923f4"} Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.967829 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.967999 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.968015 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.968025 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b455578a-dbb5-4775-acb1-02640d25619c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:56 crc kubenswrapper[4730]: I0131 16:45:56.980054 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data\") pod \"glance-default-external-api-0\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.019307 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" event={"ID":"b455578a-dbb5-4775-acb1-02640d25619c","Type":"ContainerDied","Data":"f248cb14b2cea773b47eaef70804b517a90526fbaf2160271e7e5d0a4075898a"} Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.019357 4730 scope.go:117] "RemoveContainer" containerID="30fc3510d13872c330ee42271aac40a4929d1eb0e6694a6cabfcb0021d893ed1" Jan 31 16:45:57 crc kubenswrapper[4730]: 
I0131 16:45:57.019474 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-fc2xz" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.053532 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wkj2z"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.095150 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.095704 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xfklz" event={"ID":"53655839-53b2-46cb-b859-fdb3376bc398","Type":"ContainerStarted","Data":"2a7219267bc555578d6669955a93307bda992ce779e73c38b4b618299a35f514"} Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.095859 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-66hvq" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" containerID="cri-o://41a686f2464e22e3ad094b3ba86d11e87ed5255556ef8f43e2a7d3e8a3082d2f" gracePeriod=2 Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.158937 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.288316 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-logs\") pod \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.288426 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv7sg\" (UniqueName: \"kubernetes.io/projected/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-kube-api-access-hv7sg\") pod \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.288461 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-scripts\") pod \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.288499 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data\") pod \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.288570 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-httpd-run\") pod \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.288647 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-combined-ca-bundle\") pod \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.288684 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\" (UID: \"77765e97-6296-4f5f-83f0-9ff3ff09b5f2\") " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.296707 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-logs" (OuterVolumeSpecName: "logs") pod "77765e97-6296-4f5f-83f0-9ff3ff09b5f2" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.296949 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "77765e97-6296-4f5f-83f0-9ff3ff09b5f2" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.320233 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77765e97-6296-4f5f-83f0-9ff3ff09b5f2" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.324997 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-scripts" (OuterVolumeSpecName: "scripts") pod "77765e97-6296-4f5f-83f0-9ff3ff09b5f2" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.327286 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "77765e97-6296-4f5f-83f0-9ff3ff09b5f2" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.340378 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data" (OuterVolumeSpecName: "config-data") pod "77765e97-6296-4f5f-83f0-9ff3ff09b5f2" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.354209 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-fc2xz"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.354406 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-kube-api-access-hv7sg" (OuterVolumeSpecName: "kube-api-access-hv7sg") pod "77765e97-6296-4f5f-83f0-9ff3ff09b5f2" (UID: "77765e97-6296-4f5f-83f0-9ff3ff09b5f2"). InnerVolumeSpecName "kube-api-access-hv7sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.393922 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.393953 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.393979 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.393989 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.393997 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv7sg\" (UniqueName: \"kubernetes.io/projected/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-kube-api-access-hv7sg\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.394006 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.394013 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77765e97-6296-4f5f-83f0-9ff3ff09b5f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.422960 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-fc2xz"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.461912 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67744bc4b5-tg4xw"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.493271 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.507653 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.528524 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.555927 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69df784bcc-98p6s"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.573291 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78dd7cd7dc-htltf"] Jan 31 16:45:57 crc kubenswrapper[4730]: E0131 16:45:57.573772 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b455578a-dbb5-4775-acb1-02640d25619c" containerName="init" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.573784 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b455578a-dbb5-4775-acb1-02640d25619c" containerName="init" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.573982 4730 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b455578a-dbb5-4775-acb1-02640d25619c" containerName="init" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.574880 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.599888 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78dd7cd7dc-htltf"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.602447 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.705471 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12546ea-8841-46b2-abea-fd330847d69d-logs\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.715533 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwjk\" (UniqueName: \"kubernetes.io/projected/c12546ea-8841-46b2-abea-fd330847d69d-kube-api-access-qmwjk\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.722994 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c12546ea-8841-46b2-abea-fd330847d69d-horizon-secret-key\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.723110 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-config-data\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.723202 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-scripts\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.787198 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.830959 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-config-data\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.831027 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-scripts\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" 
Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.831097 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12546ea-8841-46b2-abea-fd330847d69d-logs\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.831128 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwjk\" (UniqueName: \"kubernetes.io/projected/c12546ea-8841-46b2-abea-fd330847d69d-kube-api-access-qmwjk\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.831174 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c12546ea-8841-46b2-abea-fd330847d69d-horizon-secret-key\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.839514 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12546ea-8841-46b2-abea-fd330847d69d-logs\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.840596 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-scripts\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.856194 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-config-data\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.877908 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c12546ea-8841-46b2-abea-fd330847d69d-horizon-secret-key\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.892477 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwjk\" (UniqueName: \"kubernetes.io/projected/c12546ea-8841-46b2-abea-fd330847d69d-kube-api-access-qmwjk\") pod \"horizon-78dd7cd7dc-htltf\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:57 crc kubenswrapper[4730]: I0131 16:45:57.935719 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:45:58 crc kubenswrapper[4730]: E0131 16:45:58.035154 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda90ccb_0cf0_45d3_88fd_c795848c9482.slice/crio-ef85d07507057f4928024ac6405d8a3ac1edabde879bdf943b3e55ec917c9548.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda90ccb_0cf0_45d3_88fd_c795848c9482.slice/crio-conmon-ef85d07507057f4928024ac6405d8a3ac1edabde879bdf943b3e55ec917c9548.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.164597 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1227fd84-b580-4bc1-84eb-2f90802a4a3d","Type":"ContainerStarted","Data":"579c02fe71555186eaa301d460761f0387a98635d65875f4c48f6e1b328f44a1"} Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.170299 4730 generic.go:334] "Generic (PLEG): container finished" podID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerID="ef85d07507057f4928024ac6405d8a3ac1edabde879bdf943b3e55ec917c9548" exitCode=0 Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.170341 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-p65js" event={"ID":"fda90ccb-0cf0-45d3-88fd-c795848c9482","Type":"ContainerDied","Data":"ef85d07507057f4928024ac6405d8a3ac1edabde879bdf943b3e55ec917c9548"} Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.170356 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-p65js" event={"ID":"fda90ccb-0cf0-45d3-88fd-c795848c9482","Type":"ContainerStarted","Data":"7ab3cb07e6a4b20cd907d9355b4f8d5da0969458d02e81651d823d3159814309"} Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.184190 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df784bcc-98p6s" event={"ID":"00791e2a-6f2b-450d-acab-1ac4b91656ea","Type":"ContainerStarted","Data":"7e1eae4eecd4806690635b1764262ee02692da8fa85829dd9c5b7fee7fd59e65"} Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.210452 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.246453 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wkj2z" event={"ID":"2fd279f9-efa4-4fb3-a6e0-655de1c20403","Type":"ContainerStarted","Data":"f6636f03e325872a2851cee2d06ab60d21eea7051adeb3e114ef9f95ce5dc4b8"} Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.335599 4730 generic.go:334] "Generic (PLEG): container finished" podID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerID="41a686f2464e22e3ad094b3ba86d11e87ed5255556ef8f43e2a7d3e8a3082d2f" exitCode=0 Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.335712 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.343164 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hvq" event={"ID":"a81eb20f-04f9-4f66-b19a-19cd06c28329","Type":"ContainerDied","Data":"41a686f2464e22e3ad094b3ba86d11e87ed5255556ef8f43e2a7d3e8a3082d2f"} Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.360545 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hz9x\" (UniqueName: \"kubernetes.io/projected/a613cf58-5b4f-4444-89b6-9c8cd68325b0-kube-api-access-7hz9x\") pod \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.360727 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-nb\") pod \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.360981 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-config\") pod \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.361078 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-sb\") pod \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.361175 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-dns-svc\") pod \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\" (UID: \"a613cf58-5b4f-4444-89b6-9c8cd68325b0\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.397731 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a613cf58-5b4f-4444-89b6-9c8cd68325b0-kube-api-access-7hz9x" (OuterVolumeSpecName: "kube-api-access-7hz9x") pod "a613cf58-5b4f-4444-89b6-9c8cd68325b0" (UID: "a613cf58-5b4f-4444-89b6-9c8cd68325b0"). InnerVolumeSpecName "kube-api-access-7hz9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.469864 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hz9x\" (UniqueName: \"kubernetes.io/projected/a613cf58-5b4f-4444-89b6-9c8cd68325b0-kube-api-access-7hz9x\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.485881 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a613cf58-5b4f-4444-89b6-9c8cd68325b0" (UID: "a613cf58-5b4f-4444-89b6-9c8cd68325b0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.504107 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b455578a-dbb5-4775-acb1-02640d25619c" path="/var/lib/kubelet/pods/b455578a-dbb5-4775-acb1-02640d25619c/volumes" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.506520 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.524977 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a613cf58-5b4f-4444-89b6-9c8cd68325b0" (UID: "a613cf58-5b4f-4444-89b6-9c8cd68325b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.543298 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.551451 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a613cf58-5b4f-4444-89b6-9c8cd68325b0" (UID: "a613cf58-5b4f-4444-89b6-9c8cd68325b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.553070 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.557560 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-config" (OuterVolumeSpecName: "config") pod "a613cf58-5b4f-4444-89b6-9c8cd68325b0" (UID: "a613cf58-5b4f-4444-89b6-9c8cd68325b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.565853 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:45:58 crc kubenswrapper[4730]: E0131 16:45:58.566520 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.566533 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" Jan 31 16:45:58 crc kubenswrapper[4730]: E0131 16:45:58.566550 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="extract-content" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.566556 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="extract-content" Jan 31 16:45:58 crc kubenswrapper[4730]: E0131 16:45:58.566568 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="extract-utilities" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.566578 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="extract-utilities" Jan 31 16:45:58 crc kubenswrapper[4730]: E0131 16:45:58.566591 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a613cf58-5b4f-4444-89b6-9c8cd68325b0" containerName="init" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.566597 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a613cf58-5b4f-4444-89b6-9c8cd68325b0" containerName="init" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.566759 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" containerName="registry-server" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.566785 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a613cf58-5b4f-4444-89b6-9c8cd68325b0" containerName="init" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.567690 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.570054 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.571399 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.571419 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.571429 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.571438 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a613cf58-5b4f-4444-89b6-9c8cd68325b0-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.596373 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.673091 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-catalog-content\") pod \"a81eb20f-04f9-4f66-b19a-19cd06c28329\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.673181 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cczx\" (UniqueName: \"kubernetes.io/projected/a81eb20f-04f9-4f66-b19a-19cd06c28329-kube-api-access-7cczx\") pod \"a81eb20f-04f9-4f66-b19a-19cd06c28329\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.673946 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-utilities\") pod \"a81eb20f-04f9-4f66-b19a-19cd06c28329\" (UID: \"a81eb20f-04f9-4f66-b19a-19cd06c28329\") " Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674305 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnwcr\" (UniqueName: \"kubernetes.io/projected/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-kube-api-access-jnwcr\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674415 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-logs\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674453 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674476 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674524 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674544 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.674669 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-utilities" (OuterVolumeSpecName: "utilities") pod "a81eb20f-04f9-4f66-b19a-19cd06c28329" (UID: "a81eb20f-04f9-4f66-b19a-19cd06c28329"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.684109 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81eb20f-04f9-4f66-b19a-19cd06c28329-kube-api-access-7cczx" (OuterVolumeSpecName: "kube-api-access-7cczx") pod "a81eb20f-04f9-4f66-b19a-19cd06c28329" (UID: "a81eb20f-04f9-4f66-b19a-19cd06c28329"). InnerVolumeSpecName "kube-api-access-7cczx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.776093 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.776966 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777166 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnwcr\" (UniqueName: \"kubernetes.io/projected/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-kube-api-access-jnwcr\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777187 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-logs\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777211 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777249 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777299 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cczx\" (UniqueName: \"kubernetes.io/projected/a81eb20f-04f9-4f66-b19a-19cd06c28329-kube-api-access-7cczx\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777328 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.777544 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.778427 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-logs\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.778662 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.792018 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.795090 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.795902 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a81eb20f-04f9-4f66-b19a-19cd06c28329" (UID: "a81eb20f-04f9-4f66-b19a-19cd06c28329"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.824186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnwcr\" (UniqueName: \"kubernetes.io/projected/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-kube-api-access-jnwcr\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.828340 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.832775 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78dd7cd7dc-htltf"] Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.878769 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81eb20f-04f9-4f66-b19a-19cd06c28329-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:45:58 crc kubenswrapper[4730]: I0131 16:45:58.921426 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " pod="openstack/glance-default-external-api-0" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.208312 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.428967 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-8tbtm" event={"ID":"a613cf58-5b4f-4444-89b6-9c8cd68325b0","Type":"ContainerDied","Data":"2ec0978467dfd4b8f1b8a2b0e0886cc03d53aaa55dec21c88ef8aa8b7ae04094"} Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.429303 4730 scope.go:117] "RemoveContainer" containerID="ababe122c802334635f196d2267768bc3dbb5513f71b72e1279e32ec8015e785" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.429577 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-8tbtm" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.438479 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-p65js" event={"ID":"fda90ccb-0cf0-45d3-88fd-c795848c9482","Type":"ContainerStarted","Data":"d8f42234aab662bc2c7f5c48364061d157e6e950051ecf5f3eb127abe97c74d9"} Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.438703 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.440896 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78dd7cd7dc-htltf" event={"ID":"c12546ea-8841-46b2-abea-fd330847d69d","Type":"ContainerStarted","Data":"b58ea3e7831caa169c8a86e438c95683d963ca4b88f7cddda1824eed09e6cb0b"} Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.453974 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hvq" event={"ID":"a81eb20f-04f9-4f66-b19a-19cd06c28329","Type":"ContainerDied","Data":"52b383197bf3e164b055b20a5e6f23bc14c2950863894e6a9cf3577715d1a12c"} Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.454094 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66hvq" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.482431 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84976bdf-p65js" podStartSLOduration=5.482411023 podStartE2EDuration="5.482411023s" podCreationTimestamp="2026-01-31 16:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:45:59.459918223 +0000 UTC m=+946.265975129" watchObservedRunningTime="2026-01-31 16:45:59.482411023 +0000 UTC m=+946.288467939" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.536967 4730 scope.go:117] "RemoveContainer" containerID="41a686f2464e22e3ad094b3ba86d11e87ed5255556ef8f43e2a7d3e8a3082d2f" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.537773 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-8tbtm"] Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.560493 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-8tbtm"] Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.571007 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66hvq"] Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.587001 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-66hvq"] Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.611971 4730 scope.go:117] "RemoveContainer" containerID="3473a981f3486e6e812449f116a0c531face37ef015ae7a5ccaed295b2740319" Jan 31 16:45:59 crc kubenswrapper[4730]: I0131 16:45:59.738599 4730 scope.go:117] "RemoveContainer" containerID="e9e82a70cdbcbbab3aaa14c3bfef2ac97b4fcdcd8d1169621154defaa05eed7f" Jan 31 16:46:00 crc kubenswrapper[4730]: I0131 16:45:59.918416 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:46:00 crc kubenswrapper[4730]: I0131 16:46:00.506647 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77765e97-6296-4f5f-83f0-9ff3ff09b5f2" 
path="/var/lib/kubelet/pods/77765e97-6296-4f5f-83f0-9ff3ff09b5f2/volumes" Jan 31 16:46:00 crc kubenswrapper[4730]: I0131 16:46:00.507325 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a613cf58-5b4f-4444-89b6-9c8cd68325b0" path="/var/lib/kubelet/pods/a613cf58-5b4f-4444-89b6-9c8cd68325b0/volumes" Jan 31 16:46:00 crc kubenswrapper[4730]: I0131 16:46:00.508931 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81eb20f-04f9-4f66-b19a-19cd06c28329" path="/var/lib/kubelet/pods/a81eb20f-04f9-4f66-b19a-19cd06c28329/volumes" Jan 31 16:46:00 crc kubenswrapper[4730]: I0131 16:46:00.533885 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1227fd84-b580-4bc1-84eb-2f90802a4a3d","Type":"ContainerStarted","Data":"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb"} Jan 31 16:46:00 crc kubenswrapper[4730]: I0131 16:46:00.540416 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ac73d0f-0df7-45b9-a18a-04af48d9ac91","Type":"ContainerStarted","Data":"5cb830d41e207c1511c12d07ceecb5026c7767fde002a8bf06d669c47a7dd052"} Jan 31 16:46:01 crc kubenswrapper[4730]: I0131 16:46:01.572914 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1227fd84-b580-4bc1-84eb-2f90802a4a3d","Type":"ContainerStarted","Data":"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40"} Jan 31 16:46:01 crc kubenswrapper[4730]: I0131 16:46:01.573358 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-log" containerID="cri-o://7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb" gracePeriod=30 Jan 31 16:46:01 crc kubenswrapper[4730]: I0131 16:46:01.573657 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-httpd" containerID="cri-o://534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40" gracePeriod=30 Jan 31 16:46:01 crc kubenswrapper[4730]: I0131 16:46:01.576739 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ac73d0f-0df7-45b9-a18a-04af48d9ac91","Type":"ContainerStarted","Data":"19fb7ac2d691d8e0a4d3b8cb0915d0cef7b77d3bce1ba029cb3e8c3478e883e8"} Jan 31 16:46:01 crc kubenswrapper[4730]: I0131 16:46:01.604188 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.604170577 podStartE2EDuration="6.604170577s" podCreationTimestamp="2026-01-31 16:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:01.589260263 +0000 UTC m=+948.395317169" watchObservedRunningTime="2026-01-31 16:46:01.604170577 +0000 UTC m=+948.410227493" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.294607 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.390876 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-logs\") pod \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.391092 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhktw\" (UniqueName: \"kubernetes.io/projected/1227fd84-b580-4bc1-84eb-2f90802a4a3d-kube-api-access-qhktw\") pod \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.391125 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.391160 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-config-data\") pod \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.391178 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-combined-ca-bundle\") pod \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.391257 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-scripts\") pod \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.391340 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-httpd-run\") pod \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\" (UID: \"1227fd84-b580-4bc1-84eb-2f90802a4a3d\") " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.391415 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-logs" (OuterVolumeSpecName: "logs") pod "1227fd84-b580-4bc1-84eb-2f90802a4a3d" (UID: "1227fd84-b580-4bc1-84eb-2f90802a4a3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.392186 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1227fd84-b580-4bc1-84eb-2f90802a4a3d" (UID: "1227fd84-b580-4bc1-84eb-2f90802a4a3d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.398718 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-scripts" (OuterVolumeSpecName: "scripts") pod "1227fd84-b580-4bc1-84eb-2f90802a4a3d" (UID: "1227fd84-b580-4bc1-84eb-2f90802a4a3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.399331 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1227fd84-b580-4bc1-84eb-2f90802a4a3d-kube-api-access-qhktw" (OuterVolumeSpecName: "kube-api-access-qhktw") pod "1227fd84-b580-4bc1-84eb-2f90802a4a3d" (UID: "1227fd84-b580-4bc1-84eb-2f90802a4a3d"). InnerVolumeSpecName "kube-api-access-qhktw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.399716 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.399770 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1227fd84-b580-4bc1-84eb-2f90802a4a3d-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.399786 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhktw\" (UniqueName: \"kubernetes.io/projected/1227fd84-b580-4bc1-84eb-2f90802a4a3d-kube-api-access-qhktw\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.399796 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.399916 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "1227fd84-b580-4bc1-84eb-2f90802a4a3d" (UID: "1227fd84-b580-4bc1-84eb-2f90802a4a3d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.427363 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1227fd84-b580-4bc1-84eb-2f90802a4a3d" (UID: "1227fd84-b580-4bc1-84eb-2f90802a4a3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.470994 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-config-data" (OuterVolumeSpecName: "config-data") pod "1227fd84-b580-4bc1-84eb-2f90802a4a3d" (UID: "1227fd84-b580-4bc1-84eb-2f90802a4a3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.505885 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.506071 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.507311 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1227fd84-b580-4bc1-84eb-2f90802a4a3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.534260 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.591585 4730 generic.go:334] "Generic (PLEG): container finished" podID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerID="534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40" exitCode=143 Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.591610 4730 generic.go:334] "Generic (PLEG): container finished" podID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerID="7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb" exitCode=143 Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.591627 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1227fd84-b580-4bc1-84eb-2f90802a4a3d","Type":"ContainerDied","Data":"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40"} Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.591650 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1227fd84-b580-4bc1-84eb-2f90802a4a3d","Type":"ContainerDied","Data":"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb"} Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.591697 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1227fd84-b580-4bc1-84eb-2f90802a4a3d","Type":"ContainerDied","Data":"579c02fe71555186eaa301d460761f0387a98635d65875f4c48f6e1b328f44a1"} Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.591715 4730 scope.go:117] "RemoveContainer" containerID="534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.591803 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.612543 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.642579 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.650536 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.656395 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:02 crc kubenswrapper[4730]: E0131 16:46:02.656683 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-log" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.656694 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-log" Jan 31 16:46:02 crc kubenswrapper[4730]: E0131 16:46:02.656708 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-httpd" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.656714 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-httpd" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.657054 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-httpd" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.657064 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" containerName="glance-log" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.658107 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.661139 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.666716 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.677198 4730 scope.go:117] "RemoveContainer" containerID="7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.715543 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.715584 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.715637 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.715660 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.715675 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm78j\" (UniqueName: \"kubernetes.io/projected/bc03728a-57e1-497c-be93-b5a6dc008b28-kube-api-access-sm78j\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.715844 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.715887 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-logs\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.726970 4730 scope.go:117] "RemoveContainer" 
containerID="534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40" Jan 31 16:46:02 crc kubenswrapper[4730]: E0131 16:46:02.727368 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40\": container with ID starting with 534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40 not found: ID does not exist" containerID="534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.727423 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40"} err="failed to get container status \"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40\": rpc error: code = NotFound desc = could not find container \"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40\": container with ID starting with 534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40 not found: ID does not exist" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.727457 4730 scope.go:117] "RemoveContainer" containerID="7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb" Jan 31 16:46:02 crc kubenswrapper[4730]: E0131 16:46:02.728396 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb\": container with ID starting with 7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb not found: ID does not exist" containerID="7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.728435 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb"} err="failed to get container status \"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb\": rpc error: code = NotFound desc = could not find container \"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb\": container with ID starting with 7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb not found: ID does not exist" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.728465 4730 scope.go:117] "RemoveContainer" containerID="534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.729055 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40"} err="failed to get container status \"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40\": rpc error: code = NotFound desc = could not find container \"534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40\": container with ID starting with 534a4bdc4de1bc8529bd4fd2a62047ec7660e872ecac93140e2b0f0de4ea7a40 not found: ID does not exist" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.729081 4730 scope.go:117] "RemoveContainer" containerID="7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.729360 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb"} err="failed to get container status \"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb\": rpc error: code = NotFound desc = could not find container \"7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb\": container with ID starting with 7e91d5d190706311d7155d52b404449930947c24673e5236a6ebdeb103af3ceb not found: ID does not exist" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817297 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-logs\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817581 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817660 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817697 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817740 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-logs\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817735 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817788 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.817829 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm78j\" (UniqueName: \"kubernetes.io/projected/bc03728a-57e1-497c-be93-b5a6dc008b28-kube-api-access-sm78j\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.818343 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.824480 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.835982 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.838747 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.848099 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm78j\" (UniqueName: \"kubernetes.io/projected/bc03728a-57e1-497c-be93-b5a6dc008b28-kube-api-access-sm78j\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.867606 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:02 crc kubenswrapper[4730]: I0131 16:46:02.977261 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:03 crc kubenswrapper[4730]: I0131 16:46:03.629133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ac73d0f-0df7-45b9-a18a-04af48d9ac91","Type":"ContainerStarted","Data":"988d4b3b3c83b0740047b7949c589603b13e9c345704cf2773c003f13f765598"} Jan 31 16:46:03 crc kubenswrapper[4730]: I0131 16:46:03.632696 4730 generic.go:334] "Generic (PLEG): container finished" podID="bc0c867e-0453-4770-889a-6d7c6ed361da" containerID="8ab434ed6c460a0441f280ba8e6c81a3b4d8478e9ee9f29f20f740e872a262ef" exitCode=0 Jan 31 16:46:03 crc kubenswrapper[4730]: I0131 16:46:03.632734 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lwz6" event={"ID":"bc0c867e-0453-4770-889a-6d7c6ed361da","Type":"ContainerDied","Data":"8ab434ed6c460a0441f280ba8e6c81a3b4d8478e9ee9f29f20f740e872a262ef"} Jan 31 16:46:03 crc kubenswrapper[4730]: I0131 16:46:03.671710 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.671691973 podStartE2EDuration="5.671691973s" podCreationTimestamp="2026-01-31 16:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:03.66923286 +0000 UTC m=+950.475289766" watchObservedRunningTime="2026-01-31 16:46:03.671691973 +0000 UTC m=+950.477748889" Jan 31 16:46:04 crc kubenswrapper[4730]: I0131 16:46:04.338151 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:04 crc kubenswrapper[4730]: I0131 16:46:04.394367 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:46:04 crc kubenswrapper[4730]: I0131 16:46:04.494109 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1227fd84-b580-4bc1-84eb-2f90802a4a3d" path="/var/lib/kubelet/pods/1227fd84-b580-4bc1-84eb-2f90802a4a3d/volumes" Jan 31 16:46:04 crc kubenswrapper[4730]: I0131 16:46:04.494760 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:05 crc kubenswrapper[4730]: I0131 16:46:05.442763 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:46:05 crc kubenswrapper[4730]: I0131 16:46:05.497169 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7w5f2"] Jan 31 16:46:05 crc kubenswrapper[4730]: I0131 16:46:05.497586 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7w5f2" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" containerID="cri-o://6261f1eb4a5de0d08c20c1d2d6ba279f9b66d002c903f34d066f4ece82535d1a" gracePeriod=10 Jan 31 16:46:05 crc kubenswrapper[4730]: I0131 16:46:05.653067 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-log" containerID="cri-o://19fb7ac2d691d8e0a4d3b8cb0915d0cef7b77d3bce1ba029cb3e8c3478e883e8" gracePeriod=30 Jan 31 16:46:05 crc kubenswrapper[4730]: I0131 16:46:05.653482 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-httpd" containerID="cri-o://988d4b3b3c83b0740047b7949c589603b13e9c345704cf2773c003f13f765598" gracePeriod=30 Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.064111 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69df784bcc-98p6s"] Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.099552 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b5bd455fb-h66br"] Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.100916 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.109659 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.130762 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b5bd455fb-h66br"] Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.205772 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78dd7cd7dc-htltf"] Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.263684 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7788464654-cr95d"] Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.264978 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.280378 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-tls-certs\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.280431 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-scripts\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.280456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sgxt\" (UniqueName: \"kubernetes.io/projected/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-kube-api-access-4sgxt\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.280479 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-config-data\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.280496 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-secret-key\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.280582 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-logs\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.280597 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-combined-ca-bundle\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.307372 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7788464654-cr95d"] Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382324 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0374cd2d-1d23-4f00-893a-278af887d99b-scripts\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382411 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-logs\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382433 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-combined-ca-bundle\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382484 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-tls-certs\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382505 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtn5\" (UniqueName: \"kubernetes.io/projected/0374cd2d-1d23-4f00-893a-278af887d99b-kube-api-access-tmtn5\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382533 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-horizon-secret-key\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382568 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-scripts\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " 
pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382599 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-combined-ca-bundle\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382623 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sgxt\" (UniqueName: \"kubernetes.io/projected/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-kube-api-access-4sgxt\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382659 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-config-data\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382685 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-secret-key\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382706 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0374cd2d-1d23-4f00-893a-278af887d99b-config-data\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382737 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-horizon-tls-certs\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.382759 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0374cd2d-1d23-4f00-893a-278af887d99b-logs\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.383531 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-scripts\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.391358 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-logs\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.403563 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-combined-ca-bundle\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.404244 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-config-data\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.407555 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-secret-key\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.434252 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-tls-certs\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.440363 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sgxt\" (UniqueName: \"kubernetes.io/projected/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-kube-api-access-4sgxt\") pod \"horizon-b5bd455fb-h66br\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.500592 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtn5\" (UniqueName: \"kubernetes.io/projected/0374cd2d-1d23-4f00-893a-278af887d99b-kube-api-access-tmtn5\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.500653 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-horizon-secret-key\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.500713 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-combined-ca-bundle\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.500769 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0374cd2d-1d23-4f00-893a-278af887d99b-config-data\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.500827 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-horizon-tls-certs\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.500844 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0374cd2d-1d23-4f00-893a-278af887d99b-logs\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.500953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0374cd2d-1d23-4f00-893a-278af887d99b-scripts\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.501662 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0374cd2d-1d23-4f00-893a-278af887d99b-scripts\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.505511 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0374cd2d-1d23-4f00-893a-278af887d99b-config-data\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.509080 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0374cd2d-1d23-4f00-893a-278af887d99b-logs\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.512346 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-horizon-tls-certs\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.512691 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-horizon-secret-key\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.533416 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0374cd2d-1d23-4f00-893a-278af887d99b-combined-ca-bundle\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.533822 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtn5\" (UniqueName: \"kubernetes.io/projected/0374cd2d-1d23-4f00-893a-278af887d99b-kube-api-access-tmtn5\") pod \"horizon-7788464654-cr95d\" (UID: \"0374cd2d-1d23-4f00-893a-278af887d99b\") " pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc 
kubenswrapper[4730]: I0131 16:46:06.619243 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.667634 4730 generic.go:334] "Generic (PLEG): container finished" podID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerID="988d4b3b3c83b0740047b7949c589603b13e9c345704cf2773c003f13f765598" exitCode=0 Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.667662 4730 generic.go:334] "Generic (PLEG): container finished" podID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerID="19fb7ac2d691d8e0a4d3b8cb0915d0cef7b77d3bce1ba029cb3e8c3478e883e8" exitCode=143 Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.667700 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ac73d0f-0df7-45b9-a18a-04af48d9ac91","Type":"ContainerDied","Data":"988d4b3b3c83b0740047b7949c589603b13e9c345704cf2773c003f13f765598"} Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.667725 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ac73d0f-0df7-45b9-a18a-04af48d9ac91","Type":"ContainerDied","Data":"19fb7ac2d691d8e0a4d3b8cb0915d0cef7b77d3bce1ba029cb3e8c3478e883e8"} Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.671387 4730 generic.go:334] "Generic (PLEG): container finished" podID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerID="6261f1eb4a5de0d08c20c1d2d6ba279f9b66d002c903f34d066f4ece82535d1a" exitCode=0 Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.671418 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7w5f2" event={"ID":"24ce46a6-467c-4c82-9f68-900abb2601e1","Type":"ContainerDied","Data":"6261f1eb4a5de0d08c20c1d2d6ba279f9b66d002c903f34d066f4ece82535d1a"} Jan 31 16:46:06 crc kubenswrapper[4730]: I0131 16:46:06.733007 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:08 crc kubenswrapper[4730]: I0131 16:46:08.464223 4730 scope.go:117] "RemoveContainer" containerID="786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765" Jan 31 16:46:08 crc kubenswrapper[4730]: I0131 16:46:08.464504 4730 scope.go:117] "RemoveContainer" containerID="2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b" Jan 31 16:46:08 crc kubenswrapper[4730]: I0131 16:46:08.464601 4730 scope.go:117] "RemoveContainer" containerID="97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326" Jan 31 16:46:08 crc kubenswrapper[4730]: I0131 16:46:08.847428 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:46:08 crc kubenswrapper[4730]: E0131 16:46:08.847608 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:46:08 crc kubenswrapper[4730]: E0131 16:46:08.847655 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:47:12.84764162 +0000 UTC m=+1019.653698536 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:46:09 crc kubenswrapper[4730]: I0131 16:46:09.829667 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7w5f2" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 31 16:46:14 crc kubenswrapper[4730]: I0131 16:46:14.830426 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7w5f2" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 31 16:46:16 crc kubenswrapper[4730]: W0131 16:46:16.008113 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc03728a_57e1_497c_be93_b5a6dc008b28.slice/crio-334712edf5809b8a522dcdeac989fe28a561f77118bd2fa3b96d3095d11e339b WatchSource:0}: Error finding container 334712edf5809b8a522dcdeac989fe28a561f77118bd2fa3b96d3095d11e339b: Status 404 returned error can't find the container with id 334712edf5809b8a522dcdeac989fe28a561f77118bd2fa3b96d3095d11e339b Jan 31 16:46:16 crc kubenswrapper[4730]: E0131 16:46:16.412168 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 31 16:46:16 crc kubenswrapper[4730]: E0131 16:46:16.412347 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncdh677h57ch648h66bh5b8h7dh5cch59fh65bh557h59ch9h6fh5cfh76h64dh87h577h69h9dh567hddh5d4h644h59dh59fh5bfh58fh67fh67fh555q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87zjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f0d3583d-f56f-4f4b-87cb-e748976d47f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.576749 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.658600 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-fernet-keys\") pod \"bc0c867e-0453-4770-889a-6d7c6ed361da\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.658956 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znc9p\" (UniqueName: \"kubernetes.io/projected/bc0c867e-0453-4770-889a-6d7c6ed361da-kube-api-access-znc9p\") pod \"bc0c867e-0453-4770-889a-6d7c6ed361da\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.659001 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-combined-ca-bundle\") pod \"bc0c867e-0453-4770-889a-6d7c6ed361da\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.659098 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-credential-keys\") pod \"bc0c867e-0453-4770-889a-6d7c6ed361da\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.659172 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-config-data\") pod \"bc0c867e-0453-4770-889a-6d7c6ed361da\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.659223 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-scripts\") pod \"bc0c867e-0453-4770-889a-6d7c6ed361da\" (UID: \"bc0c867e-0453-4770-889a-6d7c6ed361da\") " Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.680065 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-scripts" (OuterVolumeSpecName: "scripts") pod 
"bc0c867e-0453-4770-889a-6d7c6ed361da" (UID: "bc0c867e-0453-4770-889a-6d7c6ed361da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.713934 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0c867e-0453-4770-889a-6d7c6ed361da-kube-api-access-znc9p" (OuterVolumeSpecName: "kube-api-access-znc9p") pod "bc0c867e-0453-4770-889a-6d7c6ed361da" (UID: "bc0c867e-0453-4770-889a-6d7c6ed361da"). InnerVolumeSpecName "kube-api-access-znc9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.716143 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bc0c867e-0453-4770-889a-6d7c6ed361da" (UID: "bc0c867e-0453-4770-889a-6d7c6ed361da"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.718095 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bc0c867e-0453-4770-889a-6d7c6ed361da" (UID: "bc0c867e-0453-4770-889a-6d7c6ed361da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.720965 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc0c867e-0453-4770-889a-6d7c6ed361da" (UID: "bc0c867e-0453-4770-889a-6d7c6ed361da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.766376 4730 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.766410 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znc9p\" (UniqueName: \"kubernetes.io/projected/bc0c867e-0453-4770-889a-6d7c6ed361da-kube-api-access-znc9p\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.766423 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.766433 4730 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.766443 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.767457 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-config-data" (OuterVolumeSpecName: "config-data") pod "bc0c867e-0453-4770-889a-6d7c6ed361da" (UID: "bc0c867e-0453-4770-889a-6d7c6ed361da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.790090 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bc03728a-57e1-497c-be93-b5a6dc008b28","Type":"ContainerStarted","Data":"334712edf5809b8a522dcdeac989fe28a561f77118bd2fa3b96d3095d11e339b"} Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.801987 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lwz6" event={"ID":"bc0c867e-0453-4770-889a-6d7c6ed361da","Type":"ContainerDied","Data":"0010985fc8a1412a64867c98c544820e05d3768140eade089ec4251dc58e3ad1"} Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.802022 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0010985fc8a1412a64867c98c544820e05d3768140eade089ec4251dc58e3ad1" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.802083 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5lwz6" Jan 31 16:46:16 crc kubenswrapper[4730]: I0131 16:46:16.867561 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0c867e-0453-4770-889a-6d7c6ed361da-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.667712 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5lwz6"] Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.674207 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5lwz6"] Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.753326 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qwdrx"] Jan 31 16:46:17 crc kubenswrapper[4730]: E0131 16:46:17.753656 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0c867e-0453-4770-889a-6d7c6ed361da" containerName="keystone-bootstrap" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.753671 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0c867e-0453-4770-889a-6d7c6ed361da" containerName="keystone-bootstrap" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.753889 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0c867e-0453-4770-889a-6d7c6ed361da" containerName="keystone-bootstrap" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.754428 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.756214 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n4fjp" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.756578 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.756863 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.757192 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.758205 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.775261 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qwdrx"] Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.800063 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-combined-ca-bundle\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.800113 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8r9k\" (UniqueName: \"kubernetes.io/projected/60776ef1-a236-4e56-a837-ccb57d6474a9-kube-api-access-s8r9k\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.800177 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-config-data\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.800235 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-credential-keys\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.800255 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-scripts\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.800335 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-fernet-keys\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.902252 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-credential-keys\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.902294 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-scripts\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.902355 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-fernet-keys\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.902407 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-combined-ca-bundle\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.902428 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8r9k\" (UniqueName: \"kubernetes.io/projected/60776ef1-a236-4e56-a837-ccb57d6474a9-kube-api-access-s8r9k\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.902470 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-config-data\") pod 
\"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.907006 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-fernet-keys\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.908206 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-scripts\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.908846 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-combined-ca-bundle\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.909682 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-credential-keys\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.911757 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-config-data\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:17 crc kubenswrapper[4730]: I0131 16:46:17.920977 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8r9k\" (UniqueName: \"kubernetes.io/projected/60776ef1-a236-4e56-a837-ccb57d6474a9-kube-api-access-s8r9k\") pod \"keystone-bootstrap-qwdrx\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:18 crc kubenswrapper[4730]: I0131 16:46:18.075383 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:18 crc kubenswrapper[4730]: I0131 16:46:18.479225 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc0c867e-0453-4770-889a-6d7c6ed361da" path="/var/lib/kubelet/pods/bc0c867e-0453-4770-889a-6d7c6ed361da/volumes" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.336751 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.337280 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwp7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-qpskq_openstack(f1243bfc-8196-4501-9b35-89e359501a00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.338852 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qpskq" podUID="f1243bfc-8196-4501-9b35-89e359501a00" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.351464 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.351861 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66fh9fh585hbhf7hc5h94h57bh67fh596h5fch696h57fh64hb6h695h669h7ch74h7chb7h7ch584h674h675h644h599hc7hdbhd9h8bh699q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmwjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-78dd7cd7dc-htltf_openstack(c12546ea-8841-46b2-abea-fd330847d69d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.354945 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-78dd7cd7dc-htltf" podUID="c12546ea-8841-46b2-abea-fd330847d69d" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.361328 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.361624 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ddh678h9h666h677h648h68chc8h685h578hf9h8dh655h5fbh8bh7fh56fh595h67h568h5bbh659h8bh56ch7dh9ch578h679h8bh59bh99hbbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6xj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-67744bc4b5-tg4xw_openstack(f143d45a-857a-4114-99eb-e1880e44ffbe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.369269 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-67744bc4b5-tg4xw" podUID="f143d45a-857a-4114-99eb-e1880e44ffbe" Jan 31 16:46:20 crc kubenswrapper[4730]: E0131 16:46:20.836506 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-qpskq" podUID="f1243bfc-8196-4501-9b35-89e359501a00" Jan 31 16:46:22 crc kubenswrapper[4730]: I0131 16:46:22.857198 4730 generic.go:334] "Generic (PLEG): container finished" podID="7cf9dbf3-9160-439f-96d0-4437019ae012" containerID="895cab6b16eb7a353f8c1bee26fe81294ee5929f5fd129be54f1b3481abf3bd9" exitCode=0 Jan 31 16:46:22 crc kubenswrapper[4730]: I0131 16:46:22.857277 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rw222" event={"ID":"7cf9dbf3-9160-439f-96d0-4437019ae012","Type":"ContainerDied","Data":"895cab6b16eb7a353f8c1bee26fe81294ee5929f5fd129be54f1b3481abf3bd9"} Jan 31 16:46:24 crc kubenswrapper[4730]: I0131 16:46:24.830246 4730 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-698758b865-7w5f2" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 31 16:46:24 crc kubenswrapper[4730]: I0131 16:46:24.830776 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:46:29 crc kubenswrapper[4730]: I0131 16:46:29.210286 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 16:46:29 crc kubenswrapper[4730]: I0131 16:46:29.210922 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 16:46:29 crc kubenswrapper[4730]: I0131 16:46:29.833074 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7w5f2" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 31 16:46:30 crc kubenswrapper[4730]: E0131 16:46:30.205372 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 31 16:46:30 crc kubenswrapper[4730]: E0131 16:46:30.205553 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5mqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wkj2z_openstack(2fd279f9-efa4-4fb3-a6e0-655de1c20403): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:46:30 crc kubenswrapper[4730]: E0131 16:46:30.207778 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/barbican-db-sync-wkj2z" podUID="2fd279f9-efa4-4fb3-a6e0-655de1c20403" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.317075 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.335834 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.352691 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.364761 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424153 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-dns-svc\") pod \"24ce46a6-467c-4c82-9f68-900abb2601e1\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424246 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-scripts\") pod \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424284 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424375 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-config-data\") pod \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424407 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-sb\") pod \"24ce46a6-467c-4c82-9f68-900abb2601e1\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424433 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnwcr\" (UniqueName: \"kubernetes.io/projected/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-kube-api-access-jnwcr\") pod \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424483 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-httpd-run\") pod \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424562 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-combined-ca-bundle\") pod \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\" 
(UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424606 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-logs\") pod \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\" (UID: \"6ac73d0f-0df7-45b9-a18a-04af48d9ac91\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424647 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-nb\") pod \"24ce46a6-467c-4c82-9f68-900abb2601e1\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424677 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-config\") pod \"24ce46a6-467c-4c82-9f68-900abb2601e1\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.424702 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cldt\" (UniqueName: \"kubernetes.io/projected/24ce46a6-467c-4c82-9f68-900abb2601e1-kube-api-access-5cldt\") pod \"24ce46a6-467c-4c82-9f68-900abb2601e1\" (UID: \"24ce46a6-467c-4c82-9f68-900abb2601e1\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.425297 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6ac73d0f-0df7-45b9-a18a-04af48d9ac91" (UID: "6ac73d0f-0df7-45b9-a18a-04af48d9ac91"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.425317 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-logs" (OuterVolumeSpecName: "logs") pod "6ac73d0f-0df7-45b9-a18a-04af48d9ac91" (UID: "6ac73d0f-0df7-45b9-a18a-04af48d9ac91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.431340 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-scripts" (OuterVolumeSpecName: "scripts") pod "6ac73d0f-0df7-45b9-a18a-04af48d9ac91" (UID: "6ac73d0f-0df7-45b9-a18a-04af48d9ac91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.439456 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-kube-api-access-jnwcr" (OuterVolumeSpecName: "kube-api-access-jnwcr") pod "6ac73d0f-0df7-45b9-a18a-04af48d9ac91" (UID: "6ac73d0f-0df7-45b9-a18a-04af48d9ac91"). InnerVolumeSpecName "kube-api-access-jnwcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.449316 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "6ac73d0f-0df7-45b9-a18a-04af48d9ac91" (UID: "6ac73d0f-0df7-45b9-a18a-04af48d9ac91"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.449725 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ce46a6-467c-4c82-9f68-900abb2601e1-kube-api-access-5cldt" (OuterVolumeSpecName: "kube-api-access-5cldt") pod "24ce46a6-467c-4c82-9f68-900abb2601e1" (UID: "24ce46a6-467c-4c82-9f68-900abb2601e1"). InnerVolumeSpecName "kube-api-access-5cldt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.476160 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ac73d0f-0df7-45b9-a18a-04af48d9ac91" (UID: "6ac73d0f-0df7-45b9-a18a-04af48d9ac91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.489340 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24ce46a6-467c-4c82-9f68-900abb2601e1" (UID: "24ce46a6-467c-4c82-9f68-900abb2601e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.510393 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-config-data" (OuterVolumeSpecName: "config-data") pod "6ac73d0f-0df7-45b9-a18a-04af48d9ac91" (UID: "6ac73d0f-0df7-45b9-a18a-04af48d9ac91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.519532 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24ce46a6-467c-4c82-9f68-900abb2601e1" (UID: "24ce46a6-467c-4c82-9f68-900abb2601e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.525656 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24ce46a6-467c-4c82-9f68-900abb2601e1" (UID: "24ce46a6-467c-4c82-9f68-900abb2601e1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.526271 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12546ea-8841-46b2-abea-fd330847d69d-logs\") pod \"c12546ea-8841-46b2-abea-fd330847d69d\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.526430 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f143d45a-857a-4114-99eb-e1880e44ffbe-logs\") pod \"f143d45a-857a-4114-99eb-e1880e44ffbe\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.526634 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-config-data\") pod \"f143d45a-857a-4114-99eb-e1880e44ffbe\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.526735 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c12546ea-8841-46b2-abea-fd330847d69d-horizon-secret-key\") pod \"c12546ea-8841-46b2-abea-fd330847d69d\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.527039 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6xj7\" (UniqueName: \"kubernetes.io/projected/f143d45a-857a-4114-99eb-e1880e44ffbe-kube-api-access-l6xj7\") pod \"f143d45a-857a-4114-99eb-e1880e44ffbe\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.527172 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwjk\" (UniqueName: \"kubernetes.io/projected/c12546ea-8841-46b2-abea-fd330847d69d-kube-api-access-qmwjk\") pod \"c12546ea-8841-46b2-abea-fd330847d69d\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.527527 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-scripts\") pod \"f143d45a-857a-4114-99eb-e1880e44ffbe\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.527611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f143d45a-857a-4114-99eb-e1880e44ffbe-horizon-secret-key\") pod \"f143d45a-857a-4114-99eb-e1880e44ffbe\" (UID: \"f143d45a-857a-4114-99eb-e1880e44ffbe\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.526537 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c12546ea-8841-46b2-abea-fd330847d69d-logs" (OuterVolumeSpecName: "logs") pod "c12546ea-8841-46b2-abea-fd330847d69d" (UID: "c12546ea-8841-46b2-abea-fd330847d69d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.526695 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f143d45a-857a-4114-99eb-e1880e44ffbe-logs" (OuterVolumeSpecName: "logs") pod "f143d45a-857a-4114-99eb-e1880e44ffbe" (UID: "f143d45a-857a-4114-99eb-e1880e44ffbe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.527086 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-config-data" (OuterVolumeSpecName: "config-data") pod "f143d45a-857a-4114-99eb-e1880e44ffbe" (UID: "f143d45a-857a-4114-99eb-e1880e44ffbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.528155 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-scripts" (OuterVolumeSpecName: "scripts") pod "f143d45a-857a-4114-99eb-e1880e44ffbe" (UID: "f143d45a-857a-4114-99eb-e1880e44ffbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.528264 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-config-data\") pod \"c12546ea-8841-46b2-abea-fd330847d69d\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.528337 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-scripts\") pod \"c12546ea-8841-46b2-abea-fd330847d69d\" (UID: \"c12546ea-8841-46b2-abea-fd330847d69d\") " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.528739 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.528877 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.528943 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.529017 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cldt\" (UniqueName: \"kubernetes.io/projected/24ce46a6-467c-4c82-9f68-900abb2601e1-kube-api-access-5cldt\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.529086 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.529146 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12546ea-8841-46b2-abea-fd330847d69d-logs\") on 
node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.529199 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.529275 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530908 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f143d45a-857a-4114-99eb-e1880e44ffbe-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530935 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530945 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530956 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530968 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnwcr\" (UniqueName: \"kubernetes.io/projected/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-kube-api-access-jnwcr\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530978 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ac73d0f-0df7-45b9-a18a-04af48d9ac91-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530987 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f143d45a-857a-4114-99eb-e1880e44ffbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.529199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-scripts" (OuterVolumeSpecName: "scripts") pod "c12546ea-8841-46b2-abea-fd330847d69d" (UID: "c12546ea-8841-46b2-abea-fd330847d69d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.530139 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-config-data" (OuterVolumeSpecName: "config-data") pod "c12546ea-8841-46b2-abea-fd330847d69d" (UID: "c12546ea-8841-46b2-abea-fd330847d69d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.531194 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12546ea-8841-46b2-abea-fd330847d69d-kube-api-access-qmwjk" (OuterVolumeSpecName: "kube-api-access-qmwjk") pod "c12546ea-8841-46b2-abea-fd330847d69d" (UID: "c12546ea-8841-46b2-abea-fd330847d69d"). InnerVolumeSpecName "kube-api-access-qmwjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.531578 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f143d45a-857a-4114-99eb-e1880e44ffbe-kube-api-access-l6xj7" (OuterVolumeSpecName: "kube-api-access-l6xj7") pod "f143d45a-857a-4114-99eb-e1880e44ffbe" (UID: "f143d45a-857a-4114-99eb-e1880e44ffbe"). InnerVolumeSpecName "kube-api-access-l6xj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.531580 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12546ea-8841-46b2-abea-fd330847d69d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c12546ea-8841-46b2-abea-fd330847d69d" (UID: "c12546ea-8841-46b2-abea-fd330847d69d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.532746 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f143d45a-857a-4114-99eb-e1880e44ffbe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f143d45a-857a-4114-99eb-e1880e44ffbe" (UID: "f143d45a-857a-4114-99eb-e1880e44ffbe"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.541940 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-config" (OuterVolumeSpecName: "config") pod "24ce46a6-467c-4c82-9f68-900abb2601e1" (UID: "24ce46a6-467c-4c82-9f68-900abb2601e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.553815 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.633031 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwjk\" (UniqueName: \"kubernetes.io/projected/c12546ea-8841-46b2-abea-fd330847d69d-kube-api-access-qmwjk\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.633727 4730 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f143d45a-857a-4114-99eb-e1880e44ffbe-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.633854 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.634164 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c12546ea-8841-46b2-abea-fd330847d69d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.634245 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce46a6-467c-4c82-9f68-900abb2601e1-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.634319 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.634386 4730 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c12546ea-8841-46b2-abea-fd330847d69d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.634459 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6xj7\" (UniqueName: \"kubernetes.io/projected/f143d45a-857a-4114-99eb-e1880e44ffbe-kube-api-access-l6xj7\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.976239 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7w5f2" event={"ID":"24ce46a6-467c-4c82-9f68-900abb2601e1","Type":"ContainerDied","Data":"49954cdf37bba7d47a6daec53b95edfec058a007239f63a9069c59409bf5621c"} Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.976570 4730 scope.go:117] "RemoveContainer" containerID="6261f1eb4a5de0d08c20c1d2d6ba279f9b66d002c903f34d066f4ece82535d1a" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.976714 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7w5f2" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.981946 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78dd7cd7dc-htltf" event={"ID":"c12546ea-8841-46b2-abea-fd330847d69d","Type":"ContainerDied","Data":"b58ea3e7831caa169c8a86e438c95683d963ca4b88f7cddda1824eed09e6cb0b"} Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.981980 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78dd7cd7dc-htltf" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.998839 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67744bc4b5-tg4xw" Jan 31 16:46:30 crc kubenswrapper[4730]: I0131 16:46:30.998872 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67744bc4b5-tg4xw" event={"ID":"f143d45a-857a-4114-99eb-e1880e44ffbe","Type":"ContainerDied","Data":"75a1b07196569ab4d3954ebcf2e5c5a329c85020103478824b430808a889e157"} Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.000967 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.000966 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ac73d0f-0df7-45b9-a18a-04af48d9ac91","Type":"ContainerDied","Data":"5cb830d41e207c1511c12d07ceecb5026c7767fde002a8bf06d669c47a7dd052"} Jan 31 16:46:31 crc kubenswrapper[4730]: E0131 16:46:31.002206 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wkj2z" podUID="2fd279f9-efa4-4fb3-a6e0-655de1c20403" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.034097 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7w5f2"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.053014 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7w5f2"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.075078 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.093497 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.121036 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:46:31 crc kubenswrapper[4730]: E0131 16:46:31.121609 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="init" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.121671 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="init" Jan 31 16:46:31 crc kubenswrapper[4730]: E0131 16:46:31.121740 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-httpd" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.121789 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-httpd" Jan 31 16:46:31 crc kubenswrapper[4730]: E0131 16:46:31.121862 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-log" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.121909 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-log" Jan 31 16:46:31 crc kubenswrapper[4730]: E0131 16:46:31.121959 4730 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.122005 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.122223 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.122287 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-httpd" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.122341 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" containerName="glance-log" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.123328 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.126025 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.129195 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.169128 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.193954 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67744bc4b5-tg4xw"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.202440 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67744bc4b5-tg4xw"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.216706 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78dd7cd7dc-htltf"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.256653 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.256734 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx87k\" (UniqueName: \"kubernetes.io/projected/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-kube-api-access-cx87k\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.256780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.256862 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-httpd-run\") 
pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.256884 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.256902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.257038 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-logs\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.257114 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.257646 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78dd7cd7dc-htltf"] Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359290 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-logs\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359412 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359619 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359850 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx87k\" (UniqueName: \"kubernetes.io/projected/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-kube-api-access-cx87k\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359893 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359922 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359947 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.359966 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.360004 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-logs\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.360471 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.360875 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.363979 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.366144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.383701 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.383956 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.384431 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx87k\" (UniqueName: \"kubernetes.io/projected/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-kube-api-access-cx87k\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.398063 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " pod="openstack/glance-default-external-api-0" Jan 31 16:46:31 crc kubenswrapper[4730]: I0131 16:46:31.459760 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:46:32 crc kubenswrapper[4730]: E0131 16:46:32.113399 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 31 16:46:32 crc kubenswrapper[4730]: E0131 16:46:32.113546 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98jbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xfklz_openstack(53655839-53b2-46cb-b859-fdb3376bc398): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 16:46:32 crc kubenswrapper[4730]: E0131 16:46:32.114705 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xfklz" podUID="53655839-53b2-46cb-b859-fdb3376bc398" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.197183 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rw222" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.386051 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-config\") pod \"7cf9dbf3-9160-439f-96d0-4437019ae012\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.386179 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc429\" (UniqueName: \"kubernetes.io/projected/7cf9dbf3-9160-439f-96d0-4437019ae012-kube-api-access-bc429\") pod \"7cf9dbf3-9160-439f-96d0-4437019ae012\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.386224 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-combined-ca-bundle\") pod \"7cf9dbf3-9160-439f-96d0-4437019ae012\" (UID: \"7cf9dbf3-9160-439f-96d0-4437019ae012\") " Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.409915 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf9dbf3-9160-439f-96d0-4437019ae012-kube-api-access-bc429" (OuterVolumeSpecName: "kube-api-access-bc429") pod "7cf9dbf3-9160-439f-96d0-4437019ae012" (UID: "7cf9dbf3-9160-439f-96d0-4437019ae012"). InnerVolumeSpecName "kube-api-access-bc429". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.413008 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cf9dbf3-9160-439f-96d0-4437019ae012" (UID: "7cf9dbf3-9160-439f-96d0-4437019ae012"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.414361 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-config" (OuterVolumeSpecName: "config") pod "7cf9dbf3-9160-439f-96d0-4437019ae012" (UID: "7cf9dbf3-9160-439f-96d0-4437019ae012"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.488049 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc429\" (UniqueName: \"kubernetes.io/projected/7cf9dbf3-9160-439f-96d0-4437019ae012-kube-api-access-bc429\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.488646 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.488700 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cf9dbf3-9160-439f-96d0-4437019ae012-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.491002 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" path="/var/lib/kubelet/pods/24ce46a6-467c-4c82-9f68-900abb2601e1/volumes" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.491644 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac73d0f-0df7-45b9-a18a-04af48d9ac91" path="/var/lib/kubelet/pods/6ac73d0f-0df7-45b9-a18a-04af48d9ac91/volumes" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.492564 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12546ea-8841-46b2-abea-fd330847d69d" path="/var/lib/kubelet/pods/c12546ea-8841-46b2-abea-fd330847d69d/volumes" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.493524 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f143d45a-857a-4114-99eb-e1880e44ffbe" path="/var/lib/kubelet/pods/f143d45a-857a-4114-99eb-e1880e44ffbe/volumes" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.580240 4730 scope.go:117] "RemoveContainer" containerID="625233b49ca8e0677eb7065430535d91777f61177f12f12d64a9ed194843f04f" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.633209 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b5bd455fb-h66br"] Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.725633 4730 scope.go:117] "RemoveContainer" containerID="988d4b3b3c83b0740047b7949c589603b13e9c345704cf2773c003f13f765598" Jan 31 16:46:32 crc kubenswrapper[4730]: I0131 16:46:32.860490 4730 scope.go:117] "RemoveContainer" containerID="19fb7ac2d691d8e0a4d3b8cb0915d0cef7b77d3bce1ba029cb3e8c3478e883e8" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.028608 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerStarted","Data":"8e032e10479dda715828c80666b92089178a3b27ca2130404eb55c1f9d258d72"} Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.039378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348"} Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.041323 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rw222" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.041899 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rw222" event={"ID":"7cf9dbf3-9160-439f-96d0-4437019ae012","Type":"ContainerDied","Data":"d2956a9184bafb91af198d2d2f3b5b260ed714368c8b9f9f4cedd8d001b68b25"} Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.041919 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2956a9184bafb91af198d2d2f3b5b260ed714368c8b9f9f4cedd8d001b68b25" Jan 31 16:46:33 crc kubenswrapper[4730]: E0131 16:46:33.044423 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-xfklz" podUID="53655839-53b2-46cb-b859-fdb3376bc398" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.098507 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7788464654-cr95d"] Jan 31 16:46:33 crc kubenswrapper[4730]: W0131 16:46:33.114248 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0374cd2d_1d23_4f00_893a_278af887d99b.slice/crio-c80c1e0ae857b53b6beb89412069e734161c12d02db413e274fc426f078f2299 WatchSource:0}: Error finding container c80c1e0ae857b53b6beb89412069e734161c12d02db413e274fc426f078f2299: Status 404 returned error can't find the container with id c80c1e0ae857b53b6beb89412069e734161c12d02db413e274fc426f078f2299 Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.164159 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qwdrx"] Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.357651 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.537870 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fb745b69-4nwfb"] Jan 31 16:46:33 crc kubenswrapper[4730]: E0131 16:46:33.538551 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf9dbf3-9160-439f-96d0-4437019ae012" containerName="neutron-db-sync" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.538563 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf9dbf3-9160-439f-96d0-4437019ae012" containerName="neutron-db-sync" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.538745 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf9dbf3-9160-439f-96d0-4437019ae012" containerName="neutron-db-sync" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.539628 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.556751 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-4nwfb"] Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.633935 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.634032 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzs2\" (UniqueName: \"kubernetes.io/projected/10c629d7-5578-4c73-bdd7-69b268cca700-kube-api-access-rzzs2\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.634062 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-config\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.634081 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-dns-svc\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.634110 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.677398 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-777d75d768-bwvb5"] Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.681299 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.684641 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.686844 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2bx94" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.687060 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.687318 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.695828 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-777d75d768-bwvb5"] Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.735981 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.736092 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzzs2\" (UniqueName: \"kubernetes.io/projected/10c629d7-5578-4c73-bdd7-69b268cca700-kube-api-access-rzzs2\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.736133 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-config\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.736150 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-dns-svc\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.736182 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.737026 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.737506 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" 
Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.737622 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-config\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.738221 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-dns-svc\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.757143 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzs2\" (UniqueName: \"kubernetes.io/projected/10c629d7-5578-4c73-bdd7-69b268cca700-kube-api-access-rzzs2\") pod \"dnsmasq-dns-fb745b69-4nwfb\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.841423 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-httpd-config\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.841473 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-combined-ca-bundle\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.841545 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-ovndb-tls-certs\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.841568 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbw98\" (UniqueName: \"kubernetes.io/projected/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-kube-api-access-xbw98\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.841635 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-config\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.925277 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.943515 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-ovndb-tls-certs\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.943557 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbw98\" (UniqueName: \"kubernetes.io/projected/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-kube-api-access-xbw98\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.943630 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-config\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.943663 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-httpd-config\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.943684 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-combined-ca-bundle\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.977326 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-ovndb-tls-certs\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.978019 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-config\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.979108 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-httpd-config\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.982564 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbw98\" (UniqueName: \"kubernetes.io/projected/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-kube-api-access-xbw98\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:33 crc kubenswrapper[4730]: I0131 16:46:33.989025 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-combined-ca-bundle\") pod \"neutron-777d75d768-bwvb5\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.034232 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.104496 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerStarted","Data":"5f76ea53478fba62d51bf2177248f8d97c1edacf725d569c9a1e0b691cca8300"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.104771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerStarted","Data":"31c3f1d338e9abdfe52a8ea48e754f02a316f206eec6752e7c454b2a52955b20"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.147709 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qwdrx" event={"ID":"60776ef1-a236-4e56-a837-ccb57d6474a9","Type":"ContainerStarted","Data":"1eda9ad6eb506fb6820f116265d9d58d5a39d69480873128b089af6f5d2c078f"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.147753 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qwdrx" event={"ID":"60776ef1-a236-4e56-a837-ccb57d6474a9","Type":"ContainerStarted","Data":"c2ee110fef5098200328f28eb3b16ae08c6541c651ec27a928657cc28ff224b1"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.148089 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b5bd455fb-h66br" podStartSLOduration=28.148073905 podStartE2EDuration="28.148073905s" podCreationTimestamp="2026-01-31 16:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:34.144198068 +0000 UTC m=+980.950254984" watchObservedRunningTime="2026-01-31 16:46:34.148073905 +0000 UTC m=+980.954130821" Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.153249 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qpskq" event={"ID":"f1243bfc-8196-4501-9b35-89e359501a00","Type":"ContainerStarted","Data":"40be573340b10cb3c61e30fe8e2cf52895d46d55f706c6158a5680c75321aca9"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.160425 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerStarted","Data":"6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.198436 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qwdrx" podStartSLOduration=17.198420719 podStartE2EDuration="17.198420719s" podCreationTimestamp="2026-01-31 16:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:34.189924554 +0000 UTC m=+980.995981470" watchObservedRunningTime="2026-01-31 16:46:34.198420719 +0000 UTC m=+981.004477635" Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.220632 4730 generic.go:334] "Generic (PLEG): container 
finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" exitCode=1 Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.220695 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.220720 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.220734 4730 scope.go:117] "RemoveContainer" containerID="786f8582b1d464af042106b58dc4a961d37e50defef7db41bb247eaa82ebf765" Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.225960 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df784bcc-98p6s" event={"ID":"00791e2a-6f2b-450d-acab-1ac4b91656ea","Type":"ContainerStarted","Data":"2dc6ce954598db57e2003a858ae5ba8949d40ef77652fb4e121d946900bfba08"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.237729 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qpskq" podStartSLOduration=3.9400363560000002 podStartE2EDuration="40.237709497s" podCreationTimestamp="2026-01-31 16:45:54 +0000 UTC" firstStartedPulling="2026-01-31 16:45:56.397264582 +0000 UTC m=+943.203321498" lastFinishedPulling="2026-01-31 16:46:32.694937723 +0000 UTC m=+979.500994639" observedRunningTime="2026-01-31 16:46:34.23057001 +0000 UTC m=+981.036626916" watchObservedRunningTime="2026-01-31 16:46:34.237709497 +0000 UTC m=+981.043766413" Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.245088 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85700f98-5f9c-41da-9ef2-f5ff4aa785c6","Type":"ContainerStarted","Data":"f192d54e97801cfb1deb44cffaa445ae718419d1b97ab9ac21703041e3e0b798"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.247646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bc03728a-57e1-497c-be93-b5a6dc008b28","Type":"ContainerStarted","Data":"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.259314 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7788464654-cr95d" event={"ID":"0374cd2d-1d23-4f00-893a-278af887d99b","Type":"ContainerStarted","Data":"c88a5d3caaf1abd69b9122a6b4eb04aff7a083c4da8bb0685a3c3c8b71791b14"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.259358 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7788464654-cr95d" event={"ID":"0374cd2d-1d23-4f00-893a-278af887d99b","Type":"ContainerStarted","Data":"c80c1e0ae857b53b6beb89412069e734161c12d02db413e274fc426f078f2299"} Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.735313 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-4nwfb"] Jan 31 16:46:34 crc kubenswrapper[4730]: I0131 16:46:34.833554 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7w5f2" podUID="24ce46a6-467c-4c82-9f68-900abb2601e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.111:5353: i/o timeout" Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.299879 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" event={"ID":"10c629d7-5578-4c73-bdd7-69b268cca700","Type":"ContainerStarted","Data":"d0ff38d8f8f8d17d662c70c2dd568c621a85694f75bbb8f49f8a57469a8f847f"} Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.340434 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-777d75d768-bwvb5"] Jan 31 16:46:35 crc kubenswrapper[4730]: W0131 16:46:35.342924 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a9c06b_a7ce_4f27_97d9_fafb4b70f1dd.slice/crio-a238526406f8be00165e602961e1a70f1bc9fcc4dce196769b32f5f419d3c375 WatchSource:0}: Error finding container a238526406f8be00165e602961e1a70f1bc9fcc4dce196769b32f5f419d3c375: Status 404 returned error can't find the container with id a238526406f8be00165e602961e1a70f1bc9fcc4dce196769b32f5f419d3c375 Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.375292 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" exitCode=1 Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.375395 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5"} Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.375474 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549"} Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.375554 4730 scope.go:117] "RemoveContainer" containerID="2102570acd0d4063edb8ff73bbc2db62d76245e54759273f9b6b29e86aa93a9b" Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.378738 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.378873 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:46:35 crc kubenswrapper[4730]: E0131 16:46:35.379307 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.386501 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df784bcc-98p6s" event={"ID":"00791e2a-6f2b-450d-acab-1ac4b91656ea","Type":"ContainerStarted","Data":"86a7791b7f970b4c2e27e68a5edfed6a34033cd7cd6bd79d79246be431e08272"} Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.386680 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-69df784bcc-98p6s" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon-log" containerID="cri-o://2dc6ce954598db57e2003a858ae5ba8949d40ef77652fb4e121d946900bfba08" gracePeriod=30 Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.386959 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69df784bcc-98p6s" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon" containerID="cri-o://86a7791b7f970b4c2e27e68a5edfed6a34033cd7cd6bd79d79246be431e08272" gracePeriod=30 Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.409005 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7788464654-cr95d" event={"ID":"0374cd2d-1d23-4f00-893a-278af887d99b","Type":"ContainerStarted","Data":"91e328665f0dfb9fb05ca0d20e6343eb8d7f25e993535ec02909c8c02411ff47"} Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.475481 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7788464654-cr95d" podStartSLOduration=29.475465955 podStartE2EDuration="29.475465955s" podCreationTimestamp="2026-01-31 16:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:35.467245607 +0000 UTC m=+982.273302523" watchObservedRunningTime="2026-01-31 16:46:35.475465955 +0000 UTC m=+982.281522871" Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.511219 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69df784bcc-98p6s" podStartSLOduration=7.004621913 podStartE2EDuration="41.511199674s" podCreationTimestamp="2026-01-31 16:45:54 +0000 UTC" firstStartedPulling="2026-01-31 16:45:57.570327408 +0000 UTC m=+944.376384324" lastFinishedPulling="2026-01-31 16:46:32.076905169 +0000 UTC m=+978.882962085" observedRunningTime="2026-01-31 16:46:35.49301043 +0000 UTC m=+982.299067346" watchObservedRunningTime="2026-01-31 16:46:35.511199674 +0000 UTC m=+982.317256580" Jan 31 16:46:35 crc kubenswrapper[4730]: I0131 16:46:35.730824 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.421336 4730 generic.go:334] "Generic (PLEG): container finished" podID="10c629d7-5578-4c73-bdd7-69b268cca700" containerID="208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878" exitCode=0 Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.421562 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" event={"ID":"10c629d7-5578-4c73-bdd7-69b268cca700","Type":"ContainerDied","Data":"208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878"} Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.428630 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777d75d768-bwvb5" event={"ID":"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd","Type":"ContainerStarted","Data":"e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c"} Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.428658 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777d75d768-bwvb5" event={"ID":"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd","Type":"ContainerStarted","Data":"658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8"} Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.428669 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-777d75d768-bwvb5" event={"ID":"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd","Type":"ContainerStarted","Data":"a238526406f8be00165e602961e1a70f1bc9fcc4dce196769b32f5f419d3c375"} Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.429145 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.436618 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" exitCode=1 Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.436659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549"} Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.436680 4730 scope.go:117] "RemoveContainer" containerID="97a0f22ff3ede34052fb983fbbdc8c26473187f948f5a0bcbbcd93e6b7bb8326" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.437082 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.437140 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.437242 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:46:36 crc kubenswrapper[4730]: E0131 16:46:36.437713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.455395 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85700f98-5f9c-41da-9ef2-f5ff4aa785c6","Type":"ContainerStarted","Data":"c5444d6899e0440c48bfec22fe8f2bbc1c926b665fa3b1fb25b180bd7965a983"} Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.489439 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-log" containerID="cri-o://ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd" gracePeriod=30 Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.490406 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-httpd" containerID="cri-o://81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827" gracePeriod=30 Jan 31 16:46:36 crc kubenswrapper[4730]: 
I0131 16:46:36.508235 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-777d75d768-bwvb5" podStartSLOduration=3.508207343 podStartE2EDuration="3.508207343s" podCreationTimestamp="2026-01-31 16:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:36.464953085 +0000 UTC m=+983.271010001" watchObservedRunningTime="2026-01-31 16:46:36.508207343 +0000 UTC m=+983.314264259" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.519554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bc03728a-57e1-497c-be93-b5a6dc008b28","Type":"ContainerStarted","Data":"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827"} Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.620121 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.620158 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.733857 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:36 crc kubenswrapper[4730]: I0131 16:46:36.734184 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.225916 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=35.225891198 podStartE2EDuration="35.225891198s" podCreationTimestamp="2026-01-31 16:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:36.596129098 +0000 UTC m=+983.402186014" watchObservedRunningTime="2026-01-31 16:46:37.225891198 +0000 UTC m=+984.031948134" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.234040 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f7c76d449-mtwzd"] Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.235583 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.239244 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.239407 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.290537 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f7c76d449-mtwzd"] Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.339054 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.362359 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-public-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.362934 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmgw\" (UniqueName: \"kubernetes.io/projected/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-kube-api-access-8xmgw\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.362973 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-internal-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.363007 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-ovndb-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.363248 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-httpd-config\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.363413 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-combined-ca-bundle\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.363599 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-config\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.510006 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-scripts\") pod \"bc03728a-57e1-497c-be93-b5a6dc008b28\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.512330 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-combined-ca-bundle\") pod \"bc03728a-57e1-497c-be93-b5a6dc008b28\" (UID: 
\"bc03728a-57e1-497c-be93-b5a6dc008b28\") " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.513865 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"bc03728a-57e1-497c-be93-b5a6dc008b28\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.514026 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-logs\") pod \"bc03728a-57e1-497c-be93-b5a6dc008b28\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.523315 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-httpd-run\") pod \"bc03728a-57e1-497c-be93-b5a6dc008b28\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.523522 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-config-data\") pod \"bc03728a-57e1-497c-be93-b5a6dc008b28\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.524420 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm78j\" (UniqueName: \"kubernetes.io/projected/bc03728a-57e1-497c-be93-b5a6dc008b28-kube-api-access-sm78j\") pod \"bc03728a-57e1-497c-be93-b5a6dc008b28\" (UID: \"bc03728a-57e1-497c-be93-b5a6dc008b28\") " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.525643 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-combined-ca-bundle\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.525842 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-config\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.526013 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-public-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.526264 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xmgw\" (UniqueName: \"kubernetes.io/projected/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-kube-api-access-8xmgw\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.526374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-internal-tls-certs\") pod 
\"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.526465 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-ovndb-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.526575 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-httpd-config\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.531134 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-scripts" (OuterVolumeSpecName: "scripts") pod "bc03728a-57e1-497c-be93-b5a6dc008b28" (UID: "bc03728a-57e1-497c-be93-b5a6dc008b28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.532190 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-logs" (OuterVolumeSpecName: "logs") pod "bc03728a-57e1-497c-be93-b5a6dc008b28" (UID: "bc03728a-57e1-497c-be93-b5a6dc008b28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.533710 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bc03728a-57e1-497c-be93-b5a6dc008b28" (UID: "bc03728a-57e1-497c-be93-b5a6dc008b28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.559463 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "bc03728a-57e1-497c-be93-b5a6dc008b28" (UID: "bc03728a-57e1-497c-be93-b5a6dc008b28"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.571620 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-httpd-config\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.573706 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-config\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.583113 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc03728a-57e1-497c-be93-b5a6dc008b28-kube-api-access-sm78j" (OuterVolumeSpecName: "kube-api-access-sm78j") pod "bc03728a-57e1-497c-be93-b5a6dc008b28" (UID: "bc03728a-57e1-497c-be93-b5a6dc008b28"). InnerVolumeSpecName "kube-api-access-sm78j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.590716 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" event={"ID":"10c629d7-5578-4c73-bdd7-69b268cca700","Type":"ContainerStarted","Data":"ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b"} Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.591547 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.593411 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-ovndb-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.594736 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-combined-ca-bundle\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.596734 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-public-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.602521 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmgw\" (UniqueName: \"kubernetes.io/projected/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-kube-api-access-8xmgw\") pod \"neutron-6f7c76d449-mtwzd\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.604222 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-internal-tls-certs\") pod \"neutron-6f7c76d449-mtwzd\" (UID: 
\"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.623554 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.627289 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.627367 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.627450 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:46:37 crc kubenswrapper[4730]: E0131 16:46:37.627679 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.631042 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.631074 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.631083 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc03728a-57e1-497c-be93-b5a6dc008b28-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.631092 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm78j\" (UniqueName: \"kubernetes.io/projected/bc03728a-57e1-497c-be93-b5a6dc008b28-kube-api-access-sm78j\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.631101 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.647961 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc03728a-57e1-497c-be93-b5a6dc008b28" (UID: "bc03728a-57e1-497c-be93-b5a6dc008b28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.664537 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.677103 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" podStartSLOduration=4.677079293 podStartE2EDuration="4.677079293s" podCreationTimestamp="2026-01-31 16:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:37.645435776 +0000 UTC m=+984.451492702" watchObservedRunningTime="2026-01-31 16:46:37.677079293 +0000 UTC m=+984.483136209" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.684414 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85700f98-5f9c-41da-9ef2-f5ff4aa785c6","Type":"ContainerStarted","Data":"a8a7a8a0768c4834e6bb57b74dfa6519e4934cfb4ae53e8a56073cc3617fae52"} Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.696523 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-config-data" (OuterVolumeSpecName: "config-data") pod "bc03728a-57e1-497c-be93-b5a6dc008b28" (UID: "bc03728a-57e1-497c-be93-b5a6dc008b28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.707017 4730 generic.go:334] "Generic (PLEG): container finished" podID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerID="81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827" exitCode=143 Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.707061 4730 generic.go:334] "Generic (PLEG): container finished" podID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerID="ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd" exitCode=143 Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.708114 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.710673 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bc03728a-57e1-497c-be93-b5a6dc008b28","Type":"ContainerDied","Data":"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827"} Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.710714 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bc03728a-57e1-497c-be93-b5a6dc008b28","Type":"ContainerDied","Data":"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd"} Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.710725 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bc03728a-57e1-497c-be93-b5a6dc008b28","Type":"ContainerDied","Data":"334712edf5809b8a522dcdeac989fe28a561f77118bd2fa3b96d3095d11e339b"} Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.710740 4730 scope.go:117] "RemoveContainer" containerID="81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.737114 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.737093595 podStartE2EDuration="6.737093595s" podCreationTimestamp="2026-01-31 16:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:37.736160779 +0000 UTC m=+984.542217695" watchObservedRunningTime="2026-01-31 16:46:37.737093595 +0000 UTC m=+984.543150511" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.737705 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.737733 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.737744 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc03728a-57e1-497c-be93-b5a6dc008b28-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.773684 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.783605 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.813697 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:37 crc kubenswrapper[4730]: E0131 16:46:37.814042 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-log" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.814057 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-log" Jan 31 16:46:37 crc kubenswrapper[4730]: E0131 16:46:37.814085 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-httpd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.814091 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-httpd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.814262 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-httpd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.814285 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" containerName="glance-log" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.815110 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.824232 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.824380 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.864001 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.875641 4730 scope.go:117] "RemoveContainer" containerID="ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.919198 4730 scope.go:117] "RemoveContainer" containerID="81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827" Jan 31 16:46:37 crc kubenswrapper[4730]: E0131 16:46:37.923036 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827\": container with ID starting with 81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827 not found: ID does not exist" containerID="81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.923107 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827"} err="failed to get container status \"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827\": rpc error: code = NotFound desc = could not find container \"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827\": container with ID starting with 81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827 not found: ID does not exist" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.923135 4730 scope.go:117] "RemoveContainer" containerID="ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd" Jan 31 16:46:37 crc kubenswrapper[4730]: E0131 16:46:37.923890 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd\": container with ID starting with ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd not found: ID does not exist" containerID="ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.923915 4730 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd"} err="failed to get container status \"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd\": rpc error: code = NotFound desc = could not find container \"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd\": container with ID starting with ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd not found: ID does not exist" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.923932 4730 scope.go:117] "RemoveContainer" containerID="81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.924236 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827"} err="failed to get container status \"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827\": rpc error: code = NotFound desc = could not find container \"81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827\": container with ID starting with 81c1ff0c9bf9bad042cd53b011e17611af9cdbfaa017cec4bfadeaab81472827 not found: ID does not exist" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.924253 4730 scope.go:117] "RemoveContainer" containerID="ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.924511 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd"} err="failed to get container status \"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd\": rpc error: code = NotFound desc = could not find container \"ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd\": container with ID starting with ddc2d752b75368da6df9b019c1cda86e2c3cbf55209bbc660a0b0c16bcbbcbdd not found: ID does not exist" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.949727 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv5v8\" (UniqueName: \"kubernetes.io/projected/9279482b-4a11-44db-9f64-2e396fd30ef3-kube-api-access-qv5v8\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.949819 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.949865 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.949888 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.949946 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.949988 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.950004 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:37 crc kubenswrapper[4730]: I0131 16:46:37.950022 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.066504 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.066888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.066914 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.066937 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.066957 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv5v8\" (UniqueName: \"kubernetes.io/projected/9279482b-4a11-44db-9f64-2e396fd30ef3-kube-api-access-qv5v8\") pod \"glance-default-internal-api-0\" (UID: 
\"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.067004 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.067041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.067066 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.068854 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.077659 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-logs\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.077943 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.111989 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.122205 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.122661 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 
16:46:38.125309 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.154557 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv5v8\" (UniqueName: \"kubernetes.io/projected/9279482b-4a11-44db-9f64-2e396fd30ef3-kube-api-access-qv5v8\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.172546 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: W0131 16:46:38.413943 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ddb310b_d8e7_4a4a_aac3_44298afdb0bf.slice/crio-cf76ca416dca00faec9f3f2a189324c21adad4e3255c78f98d1191dd1103add1 WatchSource:0}: Error finding container cf76ca416dca00faec9f3f2a189324c21adad4e3255c78f98d1191dd1103add1: Status 404 returned error can't find the container with id cf76ca416dca00faec9f3f2a189324c21adad4e3255c78f98d1191dd1103add1 Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.422050 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f7c76d449-mtwzd"] Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.438626 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.496990 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc03728a-57e1-497c-be93-b5a6dc008b28" path="/var/lib/kubelet/pods/bc03728a-57e1-497c-be93-b5a6dc008b28/volumes" Jan 31 16:46:38 crc kubenswrapper[4730]: I0131 16:46:38.719931 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c76d449-mtwzd" event={"ID":"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf","Type":"ContainerStarted","Data":"cf76ca416dca00faec9f3f2a189324c21adad4e3255c78f98d1191dd1103add1"} Jan 31 16:46:38 crc kubenswrapper[4730]: E0131 16:46:38.978628 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1243bfc_8196_4501_9b35_89e359501a00.slice/crio-40be573340b10cb3c61e30fe8e2cf52895d46d55f706c6158a5680c75321aca9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1243bfc_8196_4501_9b35_89e359501a00.slice/crio-conmon-40be573340b10cb3c61e30fe8e2cf52895d46d55f706c6158a5680c75321aca9.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.322713 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.754711 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9279482b-4a11-44db-9f64-2e396fd30ef3","Type":"ContainerStarted","Data":"bc663fd351805108d6a580b5e5a0c784a543800f7d1017de0f6e96a3eea050f1"} Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.761429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c76d449-mtwzd" event={"ID":"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf","Type":"ContainerStarted","Data":"233ceb1cdebc0314f0aa2c4b072811d20f666c035ea555f97792170c01fefd77"} Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.761491 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c76d449-mtwzd" event={"ID":"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf","Type":"ContainerStarted","Data":"0fe8cc2ff85f09e05318581d4516d9956824f119e043d68a882c1f60cf68181d"} Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.761821 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.763458 4730 generic.go:334] "Generic (PLEG): container finished" podID="f1243bfc-8196-4501-9b35-89e359501a00" containerID="40be573340b10cb3c61e30fe8e2cf52895d46d55f706c6158a5680c75321aca9" exitCode=0 Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.763487 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qpskq" event={"ID":"f1243bfc-8196-4501-9b35-89e359501a00","Type":"ContainerDied","Data":"40be573340b10cb3c61e30fe8e2cf52895d46d55f706c6158a5680c75321aca9"} Jan 31 16:46:39 crc kubenswrapper[4730]: I0131 16:46:39.802720 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f7c76d449-mtwzd" podStartSLOduration=2.802705117 podStartE2EDuration="2.802705117s" podCreationTimestamp="2026-01-31 16:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 16:46:39.802317126 +0000 UTC m=+986.608374042" watchObservedRunningTime="2026-01-31 16:46:39.802705117 +0000 UTC m=+986.608762033" Jan 31 16:46:40 crc kubenswrapper[4730]: I0131 16:46:40.781155 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9279482b-4a11-44db-9f64-2e396fd30ef3","Type":"ContainerStarted","Data":"07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296"} Jan 31 16:46:41 crc kubenswrapper[4730]: I0131 16:46:41.460397 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 16:46:41 crc kubenswrapper[4730]: I0131 16:46:41.460734 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 16:46:41 crc kubenswrapper[4730]: I0131 16:46:41.508772 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 16:46:41 crc kubenswrapper[4730]: I0131 16:46:41.539790 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 16:46:41 crc kubenswrapper[4730]: I0131 16:46:41.789915 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 16:46:41 crc kubenswrapper[4730]: I0131 16:46:41.790186 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 16:46:42 crc kubenswrapper[4730]: I0131 16:46:42.802085 4730 generic.go:334] "Generic (PLEG): container finished" podID="60776ef1-a236-4e56-a837-ccb57d6474a9" containerID="1eda9ad6eb506fb6820f116265d9d58d5a39d69480873128b089af6f5d2c078f" exitCode=0 Jan 31 16:46:42 crc kubenswrapper[4730]: I0131 16:46:42.802789 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qwdrx" event={"ID":"60776ef1-a236-4e56-a837-ccb57d6474a9","Type":"ContainerDied","Data":"1eda9ad6eb506fb6820f116265d9d58d5a39d69480873128b089af6f5d2c078f"} Jan 31 16:46:43 crc kubenswrapper[4730]: I0131 16:46:43.822981 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:46:43 crc kubenswrapper[4730]: I0131 16:46:43.823298 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:46:43 crc kubenswrapper[4730]: I0131 16:46:43.927955 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:46:43 crc kubenswrapper[4730]: I0131 16:46:43.998079 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-p65js"] Jan 31 16:46:43 crc kubenswrapper[4730]: I0131 16:46:43.998552 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84976bdf-p65js" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerName="dnsmasq-dns" containerID="cri-o://d8f42234aab662bc2c7f5c48364061d157e6e950051ecf5f3eb127abe97c74d9" gracePeriod=10 Jan 31 16:46:44 crc kubenswrapper[4730]: I0131 16:46:44.833215 4730 generic.go:334] "Generic (PLEG): container finished" podID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerID="d8f42234aab662bc2c7f5c48364061d157e6e950051ecf5f3eb127abe97c74d9" exitCode=0 Jan 31 16:46:44 crc kubenswrapper[4730]: I0131 16:46:44.833312 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-p65js" 
event={"ID":"fda90ccb-0cf0-45d3-88fd-c795848c9482","Type":"ContainerDied","Data":"d8f42234aab662bc2c7f5c48364061d157e6e950051ecf5f3eb127abe97c74d9"} Jan 31 16:46:45 crc kubenswrapper[4730]: I0131 16:46:45.441575 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f84976bdf-p65js" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Jan 31 16:46:46 crc kubenswrapper[4730]: I0131 16:46:46.343315 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 16:46:46 crc kubenswrapper[4730]: I0131 16:46:46.343430 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:46:46 crc kubenswrapper[4730]: I0131 16:46:46.622006 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7788464654-cr95d" podUID="0374cd2d-1d23-4f00-893a-278af887d99b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 31 16:46:46 crc kubenswrapper[4730]: I0131 16:46:46.721231 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 16:46:46 crc kubenswrapper[4730]: I0131 16:46:46.734893 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.356482 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.365644 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qpskq" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.512786 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-config-data\") pod \"60776ef1-a236-4e56-a837-ccb57d6474a9\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.513100 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-fernet-keys\") pod \"60776ef1-a236-4e56-a837-ccb57d6474a9\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.513173 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwp7v\" (UniqueName: \"kubernetes.io/projected/f1243bfc-8196-4501-9b35-89e359501a00-kube-api-access-wwp7v\") pod \"f1243bfc-8196-4501-9b35-89e359501a00\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.513213 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-scripts\") pod \"f1243bfc-8196-4501-9b35-89e359501a00\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.513960 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-combined-ca-bundle\") pod \"f1243bfc-8196-4501-9b35-89e359501a00\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.514002 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-config-data\") pod \"f1243bfc-8196-4501-9b35-89e359501a00\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.514056 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-credential-keys\") pod \"60776ef1-a236-4e56-a837-ccb57d6474a9\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.514086 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-scripts\") pod \"60776ef1-a236-4e56-a837-ccb57d6474a9\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.514160 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-combined-ca-bundle\") pod \"60776ef1-a236-4e56-a837-ccb57d6474a9\" (UID: \"60776ef1-a236-4e56-a837-ccb57d6474a9\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.514189 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8r9k\" (UniqueName: \"kubernetes.io/projected/60776ef1-a236-4e56-a837-ccb57d6474a9-kube-api-access-s8r9k\") pod \"60776ef1-a236-4e56-a837-ccb57d6474a9\" (UID: 
\"60776ef1-a236-4e56-a837-ccb57d6474a9\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.514236 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1243bfc-8196-4501-9b35-89e359501a00-logs\") pod \"f1243bfc-8196-4501-9b35-89e359501a00\" (UID: \"f1243bfc-8196-4501-9b35-89e359501a00\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.519204 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-scripts" (OuterVolumeSpecName: "scripts") pod "f1243bfc-8196-4501-9b35-89e359501a00" (UID: "f1243bfc-8196-4501-9b35-89e359501a00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.520342 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1243bfc-8196-4501-9b35-89e359501a00-kube-api-access-wwp7v" (OuterVolumeSpecName: "kube-api-access-wwp7v") pod "f1243bfc-8196-4501-9b35-89e359501a00" (UID: "f1243bfc-8196-4501-9b35-89e359501a00"). InnerVolumeSpecName "kube-api-access-wwp7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.523340 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1243bfc-8196-4501-9b35-89e359501a00-logs" (OuterVolumeSpecName: "logs") pod "f1243bfc-8196-4501-9b35-89e359501a00" (UID: "f1243bfc-8196-4501-9b35-89e359501a00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.543971 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-scripts" (OuterVolumeSpecName: "scripts") pod "60776ef1-a236-4e56-a837-ccb57d6474a9" (UID: "60776ef1-a236-4e56-a837-ccb57d6474a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.557171 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60776ef1-a236-4e56-a837-ccb57d6474a9" (UID: "60776ef1-a236-4e56-a837-ccb57d6474a9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.559017 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-config-data" (OuterVolumeSpecName: "config-data") pod "60776ef1-a236-4e56-a837-ccb57d6474a9" (UID: "60776ef1-a236-4e56-a837-ccb57d6474a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.575813 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60776ef1-a236-4e56-a837-ccb57d6474a9" (UID: "60776ef1-a236-4e56-a837-ccb57d6474a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.576914 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60776ef1-a236-4e56-a837-ccb57d6474a9-kube-api-access-s8r9k" (OuterVolumeSpecName: "kube-api-access-s8r9k") pod "60776ef1-a236-4e56-a837-ccb57d6474a9" (UID: "60776ef1-a236-4e56-a837-ccb57d6474a9"). InnerVolumeSpecName "kube-api-access-s8r9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.578249 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60776ef1-a236-4e56-a837-ccb57d6474a9" (UID: "60776ef1-a236-4e56-a837-ccb57d6474a9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.607637 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-config-data" (OuterVolumeSpecName: "config-data") pod "f1243bfc-8196-4501-9b35-89e359501a00" (UID: "f1243bfc-8196-4501-9b35-89e359501a00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617906 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617929 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8r9k\" (UniqueName: \"kubernetes.io/projected/60776ef1-a236-4e56-a837-ccb57d6474a9-kube-api-access-s8r9k\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617940 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1243bfc-8196-4501-9b35-89e359501a00-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617949 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617957 4730 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617964 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwp7v\" (UniqueName: \"kubernetes.io/projected/f1243bfc-8196-4501-9b35-89e359501a00-kube-api-access-wwp7v\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617972 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617979 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 
16:46:47.617986 4730 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.617994 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60776ef1-a236-4e56-a837-ccb57d6474a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.655094 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1243bfc-8196-4501-9b35-89e359501a00" (UID: "f1243bfc-8196-4501-9b35-89e359501a00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.657758 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.718764 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-nb\") pod \"fda90ccb-0cf0-45d3-88fd-c795848c9482\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.718898 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-sb\") pod \"fda90ccb-0cf0-45d3-88fd-c795848c9482\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.718968 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-config\") pod \"fda90ccb-0cf0-45d3-88fd-c795848c9482\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.718990 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqcx\" (UniqueName: \"kubernetes.io/projected/fda90ccb-0cf0-45d3-88fd-c795848c9482-kube-api-access-hdqcx\") pod \"fda90ccb-0cf0-45d3-88fd-c795848c9482\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.719037 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-dns-svc\") pod \"fda90ccb-0cf0-45d3-88fd-c795848c9482\" (UID: \"fda90ccb-0cf0-45d3-88fd-c795848c9482\") " Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.719379 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1243bfc-8196-4501-9b35-89e359501a00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.727168 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda90ccb-0cf0-45d3-88fd-c795848c9482-kube-api-access-hdqcx" (OuterVolumeSpecName: "kube-api-access-hdqcx") pod "fda90ccb-0cf0-45d3-88fd-c795848c9482" (UID: "fda90ccb-0cf0-45d3-88fd-c795848c9482"). InnerVolumeSpecName "kube-api-access-hdqcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.793359 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fda90ccb-0cf0-45d3-88fd-c795848c9482" (UID: "fda90ccb-0cf0-45d3-88fd-c795848c9482"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.821426 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqcx\" (UniqueName: \"kubernetes.io/projected/fda90ccb-0cf0-45d3-88fd-c795848c9482-kube-api-access-hdqcx\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.821456 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.853260 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fda90ccb-0cf0-45d3-88fd-c795848c9482" (UID: "fda90ccb-0cf0-45d3-88fd-c795848c9482"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.867855 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-config" (OuterVolumeSpecName: "config") pod "fda90ccb-0cf0-45d3-88fd-c795848c9482" (UID: "fda90ccb-0cf0-45d3-88fd-c795848c9482"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.875589 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fda90ccb-0cf0-45d3-88fd-c795848c9482" (UID: "fda90ccb-0cf0-45d3-88fd-c795848c9482"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.876621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qwdrx" event={"ID":"60776ef1-a236-4e56-a837-ccb57d6474a9","Type":"ContainerDied","Data":"c2ee110fef5098200328f28eb3b16ae08c6541c651ec27a928657cc28ff224b1"} Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.876655 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ee110fef5098200328f28eb3b16ae08c6541c651ec27a928657cc28ff224b1" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.876710 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qwdrx" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.880366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qpskq" event={"ID":"f1243bfc-8196-4501-9b35-89e359501a00","Type":"ContainerDied","Data":"208633f4a6198468e989011d6d5db4d3af1ff561f21eccd315f154682adc436d"} Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.880398 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208633f4a6198468e989011d6d5db4d3af1ff561f21eccd315f154682adc436d" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.880452 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qpskq" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.897931 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-p65js" event={"ID":"fda90ccb-0cf0-45d3-88fd-c795848c9482","Type":"ContainerDied","Data":"7ab3cb07e6a4b20cd907d9355b4f8d5da0969458d02e81651d823d3159814309"} Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.898058 4730 scope.go:117] "RemoveContainer" containerID="d8f42234aab662bc2c7f5c48364061d157e6e950051ecf5f3eb127abe97c74d9" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.898179 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-p65js" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.925004 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.925032 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.925041 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda90ccb-0cf0-45d3-88fd-c795848c9482-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.935657 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-p65js"] Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.948287 4730 scope.go:117] "RemoveContainer" containerID="ef85d07507057f4928024ac6405d8a3ac1edabde879bdf943b3e55ec917c9548" Jan 31 16:46:47 crc kubenswrapper[4730]: I0131 16:46:47.950846 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-p65js"] Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.483585 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" path="/var/lib/kubelet/pods/fda90ccb-0cf0-45d3-88fd-c795848c9482/volumes" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.484298 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b54468f66-vfdd4"] Jan 31 16:46:48 crc kubenswrapper[4730]: E0131 16:46:48.485033 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerName="init" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.485046 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerName="init" Jan 31 16:46:48 crc 
kubenswrapper[4730]: E0131 16:46:48.485098 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60776ef1-a236-4e56-a837-ccb57d6474a9" containerName="keystone-bootstrap" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.485105 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="60776ef1-a236-4e56-a837-ccb57d6474a9" containerName="keystone-bootstrap" Jan 31 16:46:48 crc kubenswrapper[4730]: E0131 16:46:48.485133 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1243bfc-8196-4501-9b35-89e359501a00" containerName="placement-db-sync" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.485140 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1243bfc-8196-4501-9b35-89e359501a00" containerName="placement-db-sync" Jan 31 16:46:48 crc kubenswrapper[4730]: E0131 16:46:48.485161 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerName="dnsmasq-dns" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.485167 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerName="dnsmasq-dns" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.486590 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda90ccb-0cf0-45d3-88fd-c795848c9482" containerName="dnsmasq-dns" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.486646 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="60776ef1-a236-4e56-a837-ccb57d6474a9" containerName="keystone-bootstrap" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.486684 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1243bfc-8196-4501-9b35-89e359501a00" containerName="placement-db-sync" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.488536 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.501036 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.501685 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.501988 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n4fjp" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.502359 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.502597 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.502757 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.516210 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b54468f66-vfdd4"] Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.637601 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-config-data\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.638004 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-credential-keys\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.638061 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-scripts\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.638116 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6h6g\" (UniqueName: \"kubernetes.io/projected/54eaed65-bddf-4e89-be4e-54386d1a6768-kube-api-access-h6h6g\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.638150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-fernet-keys\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.638202 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-internal-tls-certs\") pod \"keystone-5b54468f66-vfdd4\" (UID: 
\"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.638248 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-combined-ca-bundle\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.638339 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-public-tls-certs\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.675530 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-959768976-4n77c"] Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.676826 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.682225 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.682446 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.682565 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.682667 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5dw9r" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.689895 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.702898 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-959768976-4n77c"] Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741680 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-internal-tls-certs\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741733 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85q6\" (UniqueName: \"kubernetes.io/projected/cae13f89-c09f-4e59-b3e5-7de6b4562d17-kube-api-access-m85q6\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741760 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-combined-ca-bundle\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741786 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-config-data\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741826 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae13f89-c09f-4e59-b3e5-7de6b4562d17-logs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741852 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-public-tls-certs\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741923 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-config-data\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741946 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-internal-tls-certs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741968 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-combined-ca-bundle\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.741991 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-credential-keys\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.742007 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-scripts\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.742029 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-scripts\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.742049 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h6h6g\" (UniqueName: \"kubernetes.io/projected/54eaed65-bddf-4e89-be4e-54386d1a6768-kube-api-access-h6h6g\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.742069 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-fernet-keys\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.742088 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-public-tls-certs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.758013 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-public-tls-certs\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.758063 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-fernet-keys\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.758239 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-combined-ca-bundle\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.758414 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-credential-keys\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.758616 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-scripts\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.760418 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-internal-tls-certs\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.770534 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54eaed65-bddf-4e89-be4e-54386d1a6768-config-data\") pod \"keystone-5b54468f66-vfdd4\" (UID: 
\"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.785412 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6h6g\" (UniqueName: \"kubernetes.io/projected/54eaed65-bddf-4e89-be4e-54386d1a6768-kube-api-access-h6h6g\") pod \"keystone-5b54468f66-vfdd4\" (UID: \"54eaed65-bddf-4e89-be4e-54386d1a6768\") " pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.826173 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.843713 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-internal-tls-certs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.843762 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-combined-ca-bundle\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.843792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-scripts\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.843837 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-public-tls-certs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.843861 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85q6\" (UniqueName: \"kubernetes.io/projected/cae13f89-c09f-4e59-b3e5-7de6b4562d17-kube-api-access-m85q6\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.843887 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-config-data\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.843912 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae13f89-c09f-4e59-b3e5-7de6b4562d17-logs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.844300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae13f89-c09f-4e59-b3e5-7de6b4562d17-logs\") pod 
\"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.848812 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-scripts\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.855402 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-public-tls-certs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.855611 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-config-data\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.855967 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-internal-tls-certs\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.868903 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-combined-ca-bundle\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.890704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85q6\" (UniqueName: \"kubernetes.io/projected/cae13f89-c09f-4e59-b3e5-7de6b4562d17-kube-api-access-m85q6\") pod \"placement-959768976-4n77c\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " pod="openstack/placement-959768976-4n77c" Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.959985 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wkj2z" event={"ID":"2fd279f9-efa4-4fb3-a6e0-655de1c20403","Type":"ContainerStarted","Data":"d98c34e03192a3f9bd62a9607de7d72a09e66464a566381ee903f28e2cd9c66e"} Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.983969 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerStarted","Data":"b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9"} Jan 31 16:46:48 crc kubenswrapper[4730]: I0131 16:46:48.996922 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9279482b-4a11-44db-9f64-2e396fd30ef3","Type":"ContainerStarted","Data":"cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea"} Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.004039 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-959768976-4n77c" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.079206 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.079191488 podStartE2EDuration="12.079191488s" podCreationTimestamp="2026-01-31 16:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:49.079130546 +0000 UTC m=+995.885187462" watchObservedRunningTime="2026-01-31 16:46:49.079191488 +0000 UTC m=+995.885248404" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.099269 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wkj2z" podStartSLOduration=4.734712061 podStartE2EDuration="55.099248393s" podCreationTimestamp="2026-01-31 16:45:54 +0000 UTC" firstStartedPulling="2026-01-31 16:45:57.158947374 +0000 UTC m=+943.965004280" lastFinishedPulling="2026-01-31 16:46:47.523483696 +0000 UTC m=+994.329540612" observedRunningTime="2026-01-31 16:46:48.994155713 +0000 UTC m=+995.800212629" watchObservedRunningTime="2026-01-31 16:46:49.099248393 +0000 UTC m=+995.905305309" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.138506 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b6cc64d78-7m9cj"] Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.140102 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.152439 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b6cc64d78-7m9cj"] Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.273740 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e510754-1362-4ae1-9934-59a43324b2bf-logs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.274069 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq8j\" (UniqueName: \"kubernetes.io/projected/3e510754-1362-4ae1-9934-59a43324b2bf-kube-api-access-sbq8j\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.274119 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-combined-ca-bundle\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.274134 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-internal-tls-certs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.274173 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-public-tls-certs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.274234 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-config-data\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.274263 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-scripts\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.353562 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b54468f66-vfdd4"] Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.375835 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e510754-1362-4ae1-9934-59a43324b2bf-logs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.375912 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbq8j\" (UniqueName: \"kubernetes.io/projected/3e510754-1362-4ae1-9934-59a43324b2bf-kube-api-access-sbq8j\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.375960 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-combined-ca-bundle\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.375976 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-internal-tls-certs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.375995 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-public-tls-certs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.376050 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-config-data\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.376082 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-scripts\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.376290 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e510754-1362-4ae1-9934-59a43324b2bf-logs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.392956 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-internal-tls-certs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.395712 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-scripts\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.402278 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-public-tls-certs\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.408263 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-combined-ca-bundle\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.420300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e510754-1362-4ae1-9934-59a43324b2bf-config-data\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.436248 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbq8j\" (UniqueName: \"kubernetes.io/projected/3e510754-1362-4ae1-9934-59a43324b2bf-kube-api-access-sbq8j\") pod \"placement-6b6cc64d78-7m9cj\" (UID: \"3e510754-1362-4ae1-9934-59a43324b2bf\") " pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: W0131 16:46:49.457303 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54eaed65_bddf_4e89_be4e_54386d1a6768.slice/crio-299130618ddb3736dbcfb7a75be3fa39915b3d3e630fdc0194c96d831fce06e3 WatchSource:0}: Error finding container 299130618ddb3736dbcfb7a75be3fa39915b3d3e630fdc0194c96d831fce06e3: Status 404 returned error can't find the container with id 299130618ddb3736dbcfb7a75be3fa39915b3d3e630fdc0194c96d831fce06e3 Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.491706 4730 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:49 crc kubenswrapper[4730]: I0131 16:46:49.881989 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-959768976-4n77c"] Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.009415 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-959768976-4n77c" event={"ID":"cae13f89-c09f-4e59-b3e5-7de6b4562d17","Type":"ContainerStarted","Data":"d272c96d271d7ba661b0cef74dd45c51771bcb418e4c59385569e5a8a9662d78"} Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.011512 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b54468f66-vfdd4" event={"ID":"54eaed65-bddf-4e89-be4e-54386d1a6768","Type":"ContainerStarted","Data":"cc02729720ec846441c278bf9b5601c87a8f2ad74fd430dcaae7298e1c9fae2f"} Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.011538 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b54468f66-vfdd4" event={"ID":"54eaed65-bddf-4e89-be4e-54386d1a6768","Type":"ContainerStarted","Data":"299130618ddb3736dbcfb7a75be3fa39915b3d3e630fdc0194c96d831fce06e3"} Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.011624 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.019958 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xfklz" event={"ID":"53655839-53b2-46cb-b859-fdb3376bc398","Type":"ContainerStarted","Data":"8aca09008a0d1c8b61f105f17f9581ec956efa657ae788587ccb0e38e29e1a05"} Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.044473 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b54468f66-vfdd4" podStartSLOduration=2.044454819 podStartE2EDuration="2.044454819s" podCreationTimestamp="2026-01-31 16:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:50.030611165 +0000 UTC m=+996.836668081" watchObservedRunningTime="2026-01-31 16:46:50.044454819 +0000 UTC m=+996.850511735" Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.072642 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b6cc64d78-7m9cj"] Jan 31 16:46:50 crc kubenswrapper[4730]: I0131 16:46:50.073263 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xfklz" podStartSLOduration=4.918914057 podStartE2EDuration="56.073252296s" podCreationTimestamp="2026-01-31 16:45:54 +0000 UTC" firstStartedPulling="2026-01-31 16:45:56.395891877 +0000 UTC m=+943.201948793" lastFinishedPulling="2026-01-31 16:46:47.550230116 +0000 UTC m=+994.356287032" observedRunningTime="2026-01-31 16:46:50.072938768 +0000 UTC m=+996.878995684" watchObservedRunningTime="2026-01-31 16:46:50.073252296 +0000 UTC m=+996.879309202" Jan 31 16:46:50 crc kubenswrapper[4730]: W0131 16:46:50.088038 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e510754_1362_4ae1_9934_59a43324b2bf.slice/crio-a96382163bccd67c02f10bfdda66f93569da31157e9fc3878c07bb9948f16587 WatchSource:0}: Error finding container a96382163bccd67c02f10bfdda66f93569da31157e9fc3878c07bb9948f16587: Status 404 returned error can't find the container with id 
a96382163bccd67c02f10bfdda66f93569da31157e9fc3878c07bb9948f16587 Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.028157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b6cc64d78-7m9cj" event={"ID":"3e510754-1362-4ae1-9934-59a43324b2bf","Type":"ContainerStarted","Data":"a09b3f75e61dc0290ba98f3abcf0e07cbd83031d91d3f22cfc02791a705c0cb1"} Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.028659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b6cc64d78-7m9cj" event={"ID":"3e510754-1362-4ae1-9934-59a43324b2bf","Type":"ContainerStarted","Data":"05aa6c94e0a087125422d8b27780fb377d3c1ac5ceca86a2a344e3e5b0adb1c7"} Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.028674 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b6cc64d78-7m9cj" event={"ID":"3e510754-1362-4ae1-9934-59a43324b2bf","Type":"ContainerStarted","Data":"a96382163bccd67c02f10bfdda66f93569da31157e9fc3878c07bb9948f16587"} Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.029480 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.029521 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.030164 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-959768976-4n77c" event={"ID":"cae13f89-c09f-4e59-b3e5-7de6b4562d17","Type":"ContainerStarted","Data":"d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596"} Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.030201 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-959768976-4n77c" event={"ID":"cae13f89-c09f-4e59-b3e5-7de6b4562d17","Type":"ContainerStarted","Data":"636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0"} Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.030780 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-959768976-4n77c" Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.063547 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b6cc64d78-7m9cj" podStartSLOduration=2.063530959 podStartE2EDuration="2.063530959s" podCreationTimestamp="2026-01-31 16:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:51.04660825 +0000 UTC m=+997.852665176" watchObservedRunningTime="2026-01-31 16:46:51.063530959 +0000 UTC m=+997.869587875" Jan 31 16:46:51 crc kubenswrapper[4730]: I0131 16:46:51.087138 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-959768976-4n77c" podStartSLOduration=3.087119732 podStartE2EDuration="3.087119732s" podCreationTimestamp="2026-01-31 16:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:46:51.08018162 +0000 UTC m=+997.886238536" watchObservedRunningTime="2026-01-31 16:46:51.087119732 +0000 UTC m=+997.893176648" Jan 31 16:46:52 crc kubenswrapper[4730]: I0131 16:46:52.038820 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-959768976-4n77c" Jan 31 16:46:52 crc kubenswrapper[4730]: I0131 16:46:52.469025 4730 scope.go:117] "RemoveContainer" 
containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:46:52 crc kubenswrapper[4730]: I0131 16:46:52.469088 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:46:52 crc kubenswrapper[4730]: I0131 16:46:52.469172 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:46:52 crc kubenswrapper[4730]: E0131 16:46:52.469861 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:46:53 crc kubenswrapper[4730]: I0131 16:46:53.051726 4730 generic.go:334] "Generic (PLEG): container finished" podID="2fd279f9-efa4-4fb3-a6e0-655de1c20403" containerID="d98c34e03192a3f9bd62a9607de7d72a09e66464a566381ee903f28e2cd9c66e" exitCode=0 Jan 31 16:46:53 crc kubenswrapper[4730]: I0131 16:46:53.051815 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wkj2z" event={"ID":"2fd279f9-efa4-4fb3-a6e0-655de1c20403","Type":"ContainerDied","Data":"d98c34e03192a3f9bd62a9607de7d72a09e66464a566381ee903f28e2cd9c66e"} Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.399288 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.502614 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-db-sync-config-data\") pod \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.502679 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-combined-ca-bundle\") pod \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.502697 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5mqd\" (UniqueName: \"kubernetes.io/projected/2fd279f9-efa4-4fb3-a6e0-655de1c20403-kube-api-access-x5mqd\") pod \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\" (UID: \"2fd279f9-efa4-4fb3-a6e0-655de1c20403\") " Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.526957 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2fd279f9-efa4-4fb3-a6e0-655de1c20403" (UID: "2fd279f9-efa4-4fb3-a6e0-655de1c20403"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.531250 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd279f9-efa4-4fb3-a6e0-655de1c20403-kube-api-access-x5mqd" (OuterVolumeSpecName: "kube-api-access-x5mqd") pod "2fd279f9-efa4-4fb3-a6e0-655de1c20403" (UID: "2fd279f9-efa4-4fb3-a6e0-655de1c20403"). InnerVolumeSpecName "kube-api-access-x5mqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.531971 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fd279f9-efa4-4fb3-a6e0-655de1c20403" (UID: "2fd279f9-efa4-4fb3-a6e0-655de1c20403"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.607478 4730 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.607510 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd279f9-efa4-4fb3-a6e0-655de1c20403-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:54 crc kubenswrapper[4730]: I0131 16:46:54.607518 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5mqd\" (UniqueName: \"kubernetes.io/projected/2fd279f9-efa4-4fb3-a6e0-655de1c20403-kube-api-access-x5mqd\") on node \"crc\" DevicePath \"\"" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.066242 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wkj2z" event={"ID":"2fd279f9-efa4-4fb3-a6e0-655de1c20403","Type":"ContainerDied","Data":"f6636f03e325872a2851cee2d06ab60d21eea7051adeb3e114ef9f95ce5dc4b8"} Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.066276 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6636f03e325872a2851cee2d06ab60d21eea7051adeb3e114ef9f95ce5dc4b8" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.066345 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wkj2z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.343462 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-74c8bcbdc9-xg47w"] Jan 31 16:46:55 crc kubenswrapper[4730]: E0131 16:46:55.353996 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd279f9-efa4-4fb3-a6e0-655de1c20403" containerName="barbican-db-sync" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.354032 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd279f9-efa4-4fb3-a6e0-655de1c20403" containerName="barbican-db-sync" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.354400 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd279f9-efa4-4fb3-a6e0-655de1c20403" containerName="barbican-db-sync" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.355369 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.360196 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.360422 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nggww" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.360583 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.364333 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z"] Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.365933 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.373890 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.395000 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74c8bcbdc9-xg47w"] Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.414889 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z"] Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.441609 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-config-data-custom\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.441866 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de24c449-9dfc-4e52-b571-ce305a73a1a7-logs\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.441937 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6t8j\" (UniqueName: \"kubernetes.io/projected/73aa808b-e690-4e00-b458-4d30965fe1f8-kube-api-access-w6t8j\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.442025 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxj4\" (UniqueName: \"kubernetes.io/projected/de24c449-9dfc-4e52-b571-ce305a73a1a7-kube-api-access-zqxj4\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.442206 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aa808b-e690-4e00-b458-4d30965fe1f8-logs\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " 
pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.442282 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-config-data\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.442351 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-config-data-custom\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.442409 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-combined-ca-bundle\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.442481 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-config-data\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.442538 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-combined-ca-bundle\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.505718 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-rg8kd"] Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.507604 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.526443 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-rg8kd"] Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.543974 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aa808b-e690-4e00-b458-4d30965fe1f8-logs\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-config-data\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544034 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-config-data-custom\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544056 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-combined-ca-bundle\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-config-data\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544100 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-combined-ca-bundle\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544146 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-config-data-custom\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544161 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de24c449-9dfc-4e52-b571-ce305a73a1a7-logs\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544176 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w6t8j\" (UniqueName: \"kubernetes.io/projected/73aa808b-e690-4e00-b458-4d30965fe1f8-kube-api-access-w6t8j\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544200 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxj4\" (UniqueName: \"kubernetes.io/projected/de24c449-9dfc-4e52-b571-ce305a73a1a7-kube-api-access-zqxj4\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544355 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aa808b-e690-4e00-b458-4d30965fe1f8-logs\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.544893 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de24c449-9dfc-4e52-b571-ce305a73a1a7-logs\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.549282 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-combined-ca-bundle\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.549291 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-config-data-custom\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.550433 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-combined-ca-bundle\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.559156 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-config-data\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.561495 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73aa808b-e690-4e00-b458-4d30965fe1f8-config-data-custom\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc 
kubenswrapper[4730]: I0131 16:46:55.570045 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de24c449-9dfc-4e52-b571-ce305a73a1a7-config-data\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.574761 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxj4\" (UniqueName: \"kubernetes.io/projected/de24c449-9dfc-4e52-b571-ce305a73a1a7-kube-api-access-zqxj4\") pod \"barbican-worker-74c8bcbdc9-xg47w\" (UID: \"de24c449-9dfc-4e52-b571-ce305a73a1a7\") " pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.630281 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6t8j\" (UniqueName: \"kubernetes.io/projected/73aa808b-e690-4e00-b458-4d30965fe1f8-kube-api-access-w6t8j\") pod \"barbican-keystone-listener-7ffbbc76b4-9vr9z\" (UID: \"73aa808b-e690-4e00-b458-4d30965fe1f8\") " pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.648329 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-config\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.648413 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-dns-svc\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.648456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-sb\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.648490 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-nb\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.648506 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmft\" (UniqueName: \"kubernetes.io/projected/fadea706-a2c3-43dd-ba06-a43abab1f949-kube-api-access-6gmft\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.678284 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-74c8bcbdc9-xg47w" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.703539 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.753024 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-config\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.753261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-dns-svc\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.753469 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-sb\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.753557 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-nb\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.753620 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gmft\" (UniqueName: \"kubernetes.io/projected/fadea706-a2c3-43dd-ba06-a43abab1f949-kube-api-access-6gmft\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.754195 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-config\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.754682 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-sb\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.755008 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-nb\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.755300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-dns-svc\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 
16:46:55.817533 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gmft\" (UniqueName: \"kubernetes.io/projected/fadea706-a2c3-43dd-ba06-a43abab1f949-kube-api-access-6gmft\") pod \"dnsmasq-dns-7d649d8c65-rg8kd\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.849116 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.893486 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7567bc6486-x2ktx"] Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.903938 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.917554 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.950189 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7567bc6486-x2ktx"] Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.959274 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-combined-ca-bundle\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.959315 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f466831-6be5-42f8-85cc-a170c90ad516-logs\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.959356 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.959432 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data-custom\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:55 crc kubenswrapper[4730]: I0131 16:46:55.959460 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v97mt\" (UniqueName: \"kubernetes.io/projected/1f466831-6be5-42f8-85cc-a170c90ad516-kube-api-access-v97mt\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.063098 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-combined-ca-bundle\") pod 
\"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.063314 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f466831-6be5-42f8-85cc-a170c90ad516-logs\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.063397 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.063516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data-custom\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.063603 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v97mt\" (UniqueName: \"kubernetes.io/projected/1f466831-6be5-42f8-85cc-a170c90ad516-kube-api-access-v97mt\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.064348 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f466831-6be5-42f8-85cc-a170c90ad516-logs\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.072528 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data-custom\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.085959 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-combined-ca-bundle\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.091781 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.138369 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v97mt\" (UniqueName: \"kubernetes.io/projected/1f466831-6be5-42f8-85cc-a170c90ad516-kube-api-access-v97mt\") pod \"barbican-api-7567bc6486-x2ktx\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " 
pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.226231 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.620843 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7788464654-cr95d" podUID="0374cd2d-1d23-4f00-893a-278af887d99b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.734047 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.975385 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:46:56 crc kubenswrapper[4730]: I0131 16:46:56.975441 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:46:57 crc kubenswrapper[4730]: I0131 16:46:57.157024 4730 generic.go:334] "Generic (PLEG): container finished" podID="53655839-53b2-46cb-b859-fdb3376bc398" containerID="8aca09008a0d1c8b61f105f17f9581ec956efa657ae788587ccb0e38e29e1a05" exitCode=0 Jan 31 16:46:57 crc kubenswrapper[4730]: I0131 16:46:57.157166 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xfklz" event={"ID":"53655839-53b2-46cb-b859-fdb3376bc398","Type":"ContainerDied","Data":"8aca09008a0d1c8b61f105f17f9581ec956efa657ae788587ccb0e38e29e1a05"} Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.440213 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.440497 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.504907 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.562563 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-dc5f7996-jrfrx"] Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.564037 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.579574 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.579747 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.583348 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.614898 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dc5f7996-jrfrx"] Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.711874 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-internal-tls-certs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.711927 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd701548-630f-4a34-be15-e97ed8699a34-logs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.711955 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-config-data\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.711993 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-combined-ca-bundle\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.712031 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-config-data-custom\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.712074 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-public-tls-certs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.712096 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgv2\" (UniqueName: \"kubernetes.io/projected/fd701548-630f-4a34-be15-e97ed8699a34-kube-api-access-blgv2\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " 
pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.814106 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-public-tls-certs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.814150 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgv2\" (UniqueName: \"kubernetes.io/projected/fd701548-630f-4a34-be15-e97ed8699a34-kube-api-access-blgv2\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.814548 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-internal-tls-certs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.814584 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd701548-630f-4a34-be15-e97ed8699a34-logs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.814927 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-config-data\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.814969 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-combined-ca-bundle\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.815005 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-config-data-custom\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.815302 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd701548-630f-4a34-be15-e97ed8699a34-logs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.824350 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-combined-ca-bundle\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 
16:46:58.824735 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-public-tls-certs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.832846 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-internal-tls-certs\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.833015 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-config-data-custom\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.835079 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd701548-630f-4a34-be15-e97ed8699a34-config-data\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.842265 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgv2\" (UniqueName: \"kubernetes.io/projected/fd701548-630f-4a34-be15-e97ed8699a34-kube-api-access-blgv2\") pod \"barbican-api-dc5f7996-jrfrx\" (UID: \"fd701548-630f-4a34-be15-e97ed8699a34\") " pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:58 crc kubenswrapper[4730]: I0131 16:46:58.904938 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:46:59 crc kubenswrapper[4730]: I0131 16:46:59.178937 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 16:46:59 crc kubenswrapper[4730]: I0131 16:46:59.179197 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:01 crc kubenswrapper[4730]: I0131 16:47:01.195498 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:47:01 crc kubenswrapper[4730]: I0131 16:47:01.195744 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.092238 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xfklz" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.174459 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53655839-53b2-46cb-b859-fdb3376bc398-etc-machine-id\") pod \"53655839-53b2-46cb-b859-fdb3376bc398\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.174509 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jbd\" (UniqueName: \"kubernetes.io/projected/53655839-53b2-46cb-b859-fdb3376bc398-kube-api-access-98jbd\") pod \"53655839-53b2-46cb-b859-fdb3376bc398\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.174585 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-combined-ca-bundle\") pod \"53655839-53b2-46cb-b859-fdb3376bc398\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.174608 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-db-sync-config-data\") pod \"53655839-53b2-46cb-b859-fdb3376bc398\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.174648 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-scripts\") pod \"53655839-53b2-46cb-b859-fdb3376bc398\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.174704 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-config-data\") pod \"53655839-53b2-46cb-b859-fdb3376bc398\" (UID: \"53655839-53b2-46cb-b859-fdb3376bc398\") " Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.175241 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53655839-53b2-46cb-b859-fdb3376bc398-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "53655839-53b2-46cb-b859-fdb3376bc398" (UID: "53655839-53b2-46cb-b859-fdb3376bc398"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.180588 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53655839-53b2-46cb-b859-fdb3376bc398-kube-api-access-98jbd" (OuterVolumeSpecName: "kube-api-access-98jbd") pod "53655839-53b2-46cb-b859-fdb3376bc398" (UID: "53655839-53b2-46cb-b859-fdb3376bc398"). InnerVolumeSpecName "kube-api-access-98jbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.182521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-scripts" (OuterVolumeSpecName: "scripts") pod "53655839-53b2-46cb-b859-fdb3376bc398" (UID: "53655839-53b2-46cb-b859-fdb3376bc398"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.191017 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "53655839-53b2-46cb-b859-fdb3376bc398" (UID: "53655839-53b2-46cb-b859-fdb3376bc398"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.203949 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53655839-53b2-46cb-b859-fdb3376bc398" (UID: "53655839-53b2-46cb-b859-fdb3376bc398"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.225328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xfklz" event={"ID":"53655839-53b2-46cb-b859-fdb3376bc398","Type":"ContainerDied","Data":"2a7219267bc555578d6669955a93307bda992ce779e73c38b4b618299a35f514"} Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.225361 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7219267bc555578d6669955a93307bda992ce779e73c38b4b618299a35f514" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.225415 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xfklz" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.242356 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-config-data" (OuterVolumeSpecName: "config-data") pod "53655839-53b2-46cb-b859-fdb3376bc398" (UID: "53655839-53b2-46cb-b859-fdb3376bc398"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.278267 4730 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53655839-53b2-46cb-b859-fdb3376bc398-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.278307 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jbd\" (UniqueName: \"kubernetes.io/projected/53655839-53b2-46cb-b859-fdb3376bc398-kube-api-access-98jbd\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.278316 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.278324 4730 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.278333 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.278341 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53655839-53b2-46cb-b859-fdb3376bc398-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.887180 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.887949 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:47:02 crc kubenswrapper[4730]: I0131 16:47:02.897211 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.436453 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:03 crc kubenswrapper[4730]: E0131 16:47:03.436868 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53655839-53b2-46cb-b859-fdb3376bc398" containerName="cinder-db-sync" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.436886 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="53655839-53b2-46cb-b859-fdb3376bc398" containerName="cinder-db-sync" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.437096 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="53655839-53b2-46cb-b859-fdb3376bc398" containerName="cinder-db-sync" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.438286 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.443145 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.443219 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.443374 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hdlj2" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.443655 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.473934 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-rg8kd"] Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.493398 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.527477 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57fff66767-t7tcb"] Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.528971 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.529225 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.529272 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e61373e-9345-4a2a-a252-15b10ed8ae59-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.529318 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.529342 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntlbv\" (UniqueName: \"kubernetes.io/projected/8e61373e-9345-4a2a-a252-15b10ed8ae59-kube-api-access-ntlbv\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.529402 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.529418 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.549283 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-t7tcb"] Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.631979 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-config\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632021 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632043 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-dns-svc\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632065 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e61373e-9345-4a2a-a252-15b10ed8ae59-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632106 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn86p\" (UniqueName: \"kubernetes.io/projected/9b9e6ee1-bfce-461b-a098-9444b2203023-kube-api-access-cn86p\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632124 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632147 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntlbv\" (UniqueName: \"kubernetes.io/projected/8e61373e-9345-4a2a-a252-15b10ed8ae59-kube-api-access-ntlbv\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632178 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-sb\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632194 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-nb\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632241 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.632256 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.633455 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e61373e-9345-4a2a-a252-15b10ed8ae59-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.655790 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.666232 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-scripts\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.669753 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.670394 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.688759 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntlbv\" (UniqueName: \"kubernetes.io/projected/8e61373e-9345-4a2a-a252-15b10ed8ae59-kube-api-access-ntlbv\") pod \"cinder-scheduler-0\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.734872 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn86p\" (UniqueName: \"kubernetes.io/projected/9b9e6ee1-bfce-461b-a098-9444b2203023-kube-api-access-cn86p\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: 
\"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.734936 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-sb\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.734953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-nb\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.735059 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-config\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.735081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-dns-svc\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.735902 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-dns-svc\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.736464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-nb\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.736571 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-sb\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.737716 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-config\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.746367 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.747790 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.768478 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.769491 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.802458 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.809074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn86p\" (UniqueName: \"kubernetes.io/projected/9b9e6ee1-bfce-461b-a098-9444b2203023-kube-api-access-cn86p\") pod \"dnsmasq-dns-57fff66767-t7tcb\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.836218 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-scripts\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.836441 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.836522 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data-custom\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.836636 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e56a4c-3291-45de-9dbe-da8f3ef14129-logs\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.836719 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbfs\" (UniqueName: \"kubernetes.io/projected/72e56a4c-3291-45de-9dbe-da8f3ef14129-kube-api-access-thbfs\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.836788 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.836905 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72e56a4c-3291-45de-9dbe-da8f3ef14129-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.864365 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.938656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.938706 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data-custom\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.938773 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e56a4c-3291-45de-9dbe-da8f3ef14129-logs\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.938793 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thbfs\" (UniqueName: \"kubernetes.io/projected/72e56a4c-3291-45de-9dbe-da8f3ef14129-kube-api-access-thbfs\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.938827 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.938930 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72e56a4c-3291-45de-9dbe-da8f3ef14129-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.939015 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-scripts\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.939205 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e56a4c-3291-45de-9dbe-da8f3ef14129-logs\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.941234 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72e56a4c-3291-45de-9dbe-da8f3ef14129-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.948271 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " 
pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.949465 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data-custom\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.955495 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.969963 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-scripts\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:03 crc kubenswrapper[4730]: I0131 16:47:03.972893 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thbfs\" (UniqueName: \"kubernetes.io/projected/72e56a4c-3291-45de-9dbe-da8f3ef14129-kube-api-access-thbfs\") pod \"cinder-api-0\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " pod="openstack/cinder-api-0" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.061181 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.143425 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.512875 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f7c76d449-mtwzd"] Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.513297 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f7c76d449-mtwzd" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-api" containerID="cri-o://0fe8cc2ff85f09e05318581d4516d9956824f119e043d68a882c1f60cf68181d" gracePeriod=30 Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.513666 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f7c76d449-mtwzd" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-httpd" containerID="cri-o://233ceb1cdebc0314f0aa2c4b072811d20f666c035ea555f97792170c01fefd77" gracePeriod=30 Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.539192 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c4d975ccf-jbdgk"] Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.540620 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.595065 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f7c76d449-mtwzd" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": EOF" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.612857 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c4d975ccf-jbdgk"] Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.669932 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-httpd-config\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.669978 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-config\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.670031 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjw6x\" (UniqueName: \"kubernetes.io/projected/ce037144-daeb-412d-94f1-69bc4ed97935-kube-api-access-vjw6x\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.670090 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-internal-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.670134 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-public-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.670167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-combined-ca-bundle\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.670182 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-ovndb-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.776224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjw6x\" (UniqueName: 
\"kubernetes.io/projected/ce037144-daeb-412d-94f1-69bc4ed97935-kube-api-access-vjw6x\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.776314 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-internal-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.776367 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-public-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.776407 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-combined-ca-bundle\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.776444 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-ovndb-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.776518 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-httpd-config\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.776538 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-config\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.787199 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-httpd-config\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.789422 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-public-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.790458 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-combined-ca-bundle\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " 
pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.792522 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-config\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.816465 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-internal-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.818931 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce037144-daeb-412d-94f1-69bc4ed97935-ovndb-tls-certs\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.850544 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjw6x\" (UniqueName: \"kubernetes.io/projected/ce037144-daeb-412d-94f1-69bc4ed97935-kube-api-access-vjw6x\") pod \"neutron-c4d975ccf-jbdgk\" (UID: \"ce037144-daeb-412d-94f1-69bc4ed97935\") " pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:04 crc kubenswrapper[4730]: I0131 16:47:04.883018 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.076632 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74c8bcbdc9-xg47w"] Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.121981 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z"] Jan 31 16:47:05 crc kubenswrapper[4730]: W0131 16:47:05.179094 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde24c449_9dfc_4e52_b571_ce305a73a1a7.slice/crio-b56b44df83b9edd36c237a6f0d37b413a521451a8187910c5b2b200fe1002adc WatchSource:0}: Error finding container b56b44df83b9edd36c237a6f0d37b413a521451a8187910c5b2b200fe1002adc: Status 404 returned error can't find the container with id b56b44df83b9edd36c237a6f0d37b413a521451a8187910c5b2b200fe1002adc Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.295349 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74c8bcbdc9-xg47w" event={"ID":"de24c449-9dfc-4e52-b571-ce305a73a1a7","Type":"ContainerStarted","Data":"b56b44df83b9edd36c237a6f0d37b413a521451a8187910c5b2b200fe1002adc"} Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.303966 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" event={"ID":"73aa808b-e690-4e00-b458-4d30965fe1f8","Type":"ContainerStarted","Data":"4a57d9cd03f3efc38c2f0ba57c69e77b87775058157e446bec116ce5097a917a"} Jan 31 16:47:05 crc kubenswrapper[4730]: E0131 16:47:05.357157 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" 
podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.441545 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-rg8kd"] Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.554462 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7567bc6486-x2ktx"] Jan 31 16:47:05 crc kubenswrapper[4730]: W0131 16:47:05.613271 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f466831_6be5_42f8_85cc_a170c90ad516.slice/crio-03d261bdb7b9ad2c931f4cfad3d37d7e73b910299458b30defff9ec723576308 WatchSource:0}: Error finding container 03d261bdb7b9ad2c931f4cfad3d37d7e73b910299458b30defff9ec723576308: Status 404 returned error can't find the container with id 03d261bdb7b9ad2c931f4cfad3d37d7e73b910299458b30defff9ec723576308 Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.983359 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-t7tcb"] Jan 31 16:47:05 crc kubenswrapper[4730]: I0131 16:47:05.995949 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.008885 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.030266 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.051260 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dc5f7996-jrfrx"] Jan 31 16:47:06 crc kubenswrapper[4730]: W0131 16:47:06.072915 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd701548_630f_4a34_be15_e97ed8699a34.slice/crio-3eada0b7357a05fbdead4e0b563412c90d68ab2da0c4e65a7fb3f56abf4844b4 WatchSource:0}: Error finding container 3eada0b7357a05fbdead4e0b563412c90d68ab2da0c4e65a7fb3f56abf4844b4: Status 404 returned error can't find the container with id 3eada0b7357a05fbdead4e0b563412c90d68ab2da0c4e65a7fb3f56abf4844b4 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.199109 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c4d975ccf-jbdgk"] Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.338008 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e61373e-9345-4a2a-a252-15b10ed8ae59","Type":"ContainerStarted","Data":"d684d350362a86a8a895e1f5b9f51e0a2e37ecbf4160f0454b727f14afc23035"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.343079 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerStarted","Data":"256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.343393 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="ceilometer-notification-agent" containerID="cri-o://6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7" gracePeriod=30 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.343489 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:47:06 crc 
kubenswrapper[4730]: I0131 16:47:06.343548 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="sg-core" containerID="cri-o://b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9" gracePeriod=30 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.343527 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="proxy-httpd" containerID="cri-o://256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659" gracePeriod=30 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.347762 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72e56a4c-3291-45de-9dbe-da8f3ef14129","Type":"ContainerStarted","Data":"acea7a41c528051854271c65b0e274b064e1267b9c59247a4b8cf64fb1664ed7"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.379105 4730 generic.go:334] "Generic (PLEG): container finished" podID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerID="233ceb1cdebc0314f0aa2c4b072811d20f666c035ea555f97792170c01fefd77" exitCode=0 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.379322 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c76d449-mtwzd" event={"ID":"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf","Type":"ContainerDied","Data":"233ceb1cdebc0314f0aa2c4b072811d20f666c035ea555f97792170c01fefd77"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.386987 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" event={"ID":"9b9e6ee1-bfce-461b-a098-9444b2203023","Type":"ContainerStarted","Data":"e40741959f74e084c4a846a196b54d55090cf2c3586382d3cd67e06cbac7ed32"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.390092 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c4d975ccf-jbdgk" event={"ID":"ce037144-daeb-412d-94f1-69bc4ed97935","Type":"ContainerStarted","Data":"632d8f1152a0d88ff40f9e508a9b51e2d03a1c6fc0655da42bff35d09788ca26"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.413622 4730 generic.go:334] "Generic (PLEG): container finished" podID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerID="86a7791b7f970b4c2e27e68a5edfed6a34033cd7cd6bd79d79246be431e08272" exitCode=137 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.413651 4730 generic.go:334] "Generic (PLEG): container finished" podID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerID="2dc6ce954598db57e2003a858ae5ba8949d40ef77652fb4e121d946900bfba08" exitCode=137 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.413719 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df784bcc-98p6s" event={"ID":"00791e2a-6f2b-450d-acab-1ac4b91656ea","Type":"ContainerDied","Data":"86a7791b7f970b4c2e27e68a5edfed6a34033cd7cd6bd79d79246be431e08272"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.419017 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df784bcc-98p6s" event={"ID":"00791e2a-6f2b-450d-acab-1ac4b91656ea","Type":"ContainerDied","Data":"2dc6ce954598db57e2003a858ae5ba8949d40ef77652fb4e121d946900bfba08"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.436110 4730 generic.go:334] "Generic (PLEG): container finished" podID="fadea706-a2c3-43dd-ba06-a43abab1f949" containerID="7e63dc004c6b1eeca8620488954583d718bd7bcff00c525c3c64b641de9c7866" exitCode=0 Jan 31 16:47:06 
crc kubenswrapper[4730]: I0131 16:47:06.436197 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" event={"ID":"fadea706-a2c3-43dd-ba06-a43abab1f949","Type":"ContainerDied","Data":"7e63dc004c6b1eeca8620488954583d718bd7bcff00c525c3c64b641de9c7866"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.436258 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" event={"ID":"fadea706-a2c3-43dd-ba06-a43abab1f949","Type":"ContainerStarted","Data":"7d0266fbe2a644b3e4e11501eb08c9e7b21b81aeaf86cd8569c1acf69f77e87b"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.447350 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dc5f7996-jrfrx" event={"ID":"fd701548-630f-4a34-be15-e97ed8699a34","Type":"ContainerStarted","Data":"3eada0b7357a05fbdead4e0b563412c90d68ab2da0c4e65a7fb3f56abf4844b4"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.456861 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7567bc6486-x2ktx" event={"ID":"1f466831-6be5-42f8-85cc-a170c90ad516","Type":"ContainerStarted","Data":"03d261bdb7b9ad2c931f4cfad3d37d7e73b910299458b30defff9ec723576308"} Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.620277 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7788464654-cr95d" podUID="0374cd2d-1d23-4f00-893a-278af887d99b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.620337 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.620979 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"91e328665f0dfb9fb05ca0d20e6343eb8d7f25e993535ec02909c8c02411ff47"} pod="openstack/horizon-7788464654-cr95d" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.621005 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7788464654-cr95d" podUID="0374cd2d-1d23-4f00-893a-278af887d99b" containerName="horizon" containerID="cri-o://91e328665f0dfb9fb05ca0d20e6343eb8d7f25e993535ec02909c8c02411ff47" gracePeriod=30 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.738770 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.738851 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.739509 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"5f76ea53478fba62d51bf2177248f8d97c1edacf725d569c9a1e0b691cca8300"} pod="openstack/horizon-b5bd455fb-h66br" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.739538 4730 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" containerID="cri-o://5f76ea53478fba62d51bf2177248f8d97c1edacf725d569c9a1e0b691cca8300" gracePeriod=30 Jan 31 16:47:06 crc kubenswrapper[4730]: I0131 16:47:06.946682 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.004950 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.065937 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-config-data\") pod \"00791e2a-6f2b-450d-acab-1ac4b91656ea\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.066041 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72457\" (UniqueName: \"kubernetes.io/projected/00791e2a-6f2b-450d-acab-1ac4b91656ea-kube-api-access-72457\") pod \"00791e2a-6f2b-450d-acab-1ac4b91656ea\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.066204 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00791e2a-6f2b-450d-acab-1ac4b91656ea-horizon-secret-key\") pod \"00791e2a-6f2b-450d-acab-1ac4b91656ea\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.066233 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00791e2a-6f2b-450d-acab-1ac4b91656ea-logs\") pod \"00791e2a-6f2b-450d-acab-1ac4b91656ea\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.066260 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-scripts\") pod \"00791e2a-6f2b-450d-acab-1ac4b91656ea\" (UID: \"00791e2a-6f2b-450d-acab-1ac4b91656ea\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.066907 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00791e2a-6f2b-450d-acab-1ac4b91656ea-logs" (OuterVolumeSpecName: "logs") pod "00791e2a-6f2b-450d-acab-1ac4b91656ea" (UID: "00791e2a-6f2b-450d-acab-1ac4b91656ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.079223 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00791e2a-6f2b-450d-acab-1ac4b91656ea-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "00791e2a-6f2b-450d-acab-1ac4b91656ea" (UID: "00791e2a-6f2b-450d-acab-1ac4b91656ea"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.081379 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00791e2a-6f2b-450d-acab-1ac4b91656ea-kube-api-access-72457" (OuterVolumeSpecName: "kube-api-access-72457") pod "00791e2a-6f2b-450d-acab-1ac4b91656ea" (UID: "00791e2a-6f2b-450d-acab-1ac4b91656ea"). InnerVolumeSpecName "kube-api-access-72457". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.138248 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-scripts" (OuterVolumeSpecName: "scripts") pod "00791e2a-6f2b-450d-acab-1ac4b91656ea" (UID: "00791e2a-6f2b-450d-acab-1ac4b91656ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.148707 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-config-data" (OuterVolumeSpecName: "config-data") pod "00791e2a-6f2b-450d-acab-1ac4b91656ea" (UID: "00791e2a-6f2b-450d-acab-1ac4b91656ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.168026 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-config\") pod \"fadea706-a2c3-43dd-ba06-a43abab1f949\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.168394 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-dns-svc\") pod \"fadea706-a2c3-43dd-ba06-a43abab1f949\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.168470 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-nb\") pod \"fadea706-a2c3-43dd-ba06-a43abab1f949\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.168512 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gmft\" (UniqueName: \"kubernetes.io/projected/fadea706-a2c3-43dd-ba06-a43abab1f949-kube-api-access-6gmft\") pod \"fadea706-a2c3-43dd-ba06-a43abab1f949\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.169005 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-sb\") pod \"fadea706-a2c3-43dd-ba06-a43abab1f949\" (UID: \"fadea706-a2c3-43dd-ba06-a43abab1f949\") " Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.169507 4730 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00791e2a-6f2b-450d-acab-1ac4b91656ea-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.169525 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/00791e2a-6f2b-450d-acab-1ac4b91656ea-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.169556 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.169565 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00791e2a-6f2b-450d-acab-1ac4b91656ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.169575 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72457\" (UniqueName: \"kubernetes.io/projected/00791e2a-6f2b-450d-acab-1ac4b91656ea-kube-api-access-72457\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.179026 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fadea706-a2c3-43dd-ba06-a43abab1f949-kube-api-access-6gmft" (OuterVolumeSpecName: "kube-api-access-6gmft") pod "fadea706-a2c3-43dd-ba06-a43abab1f949" (UID: "fadea706-a2c3-43dd-ba06-a43abab1f949"). InnerVolumeSpecName "kube-api-access-6gmft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.200344 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fadea706-a2c3-43dd-ba06-a43abab1f949" (UID: "fadea706-a2c3-43dd-ba06-a43abab1f949"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.218261 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fadea706-a2c3-43dd-ba06-a43abab1f949" (UID: "fadea706-a2c3-43dd-ba06-a43abab1f949"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.243956 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-config" (OuterVolumeSpecName: "config") pod "fadea706-a2c3-43dd-ba06-a43abab1f949" (UID: "fadea706-a2c3-43dd-ba06-a43abab1f949"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.270978 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.271005 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.271014 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.271023 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gmft\" (UniqueName: \"kubernetes.io/projected/fadea706-a2c3-43dd-ba06-a43abab1f949-kube-api-access-6gmft\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.306466 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fadea706-a2c3-43dd-ba06-a43abab1f949" (UID: "fadea706-a2c3-43dd-ba06-a43abab1f949"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.373483 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadea706-a2c3-43dd-ba06-a43abab1f949-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.466100 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.466167 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.466249 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:47:07 crc kubenswrapper[4730]: E0131 16:47:07.466509 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.478458 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" event={"ID":"fadea706-a2c3-43dd-ba06-a43abab1f949","Type":"ContainerDied","Data":"7d0266fbe2a644b3e4e11501eb08c9e7b21b81aeaf86cd8569c1acf69f77e87b"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 
16:47:07.478521 4730 scope.go:117] "RemoveContainer" containerID="7e63dc004c6b1eeca8620488954583d718bd7bcff00c525c3c64b641de9c7866" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.478621 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-rg8kd" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.493148 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dc5f7996-jrfrx" event={"ID":"fd701548-630f-4a34-be15-e97ed8699a34","Type":"ContainerStarted","Data":"5ffe3fb46d8c8a67b434a559c212e29427ff5f87d5d874c118b1477e7b8252cb"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.493189 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dc5f7996-jrfrx" event={"ID":"fd701548-630f-4a34-be15-e97ed8699a34","Type":"ContainerStarted","Data":"c248ee030af7a90c1c96395af7f96351a0aed48bdfcdcc55e16addd5c8b30810"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.521038 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7567bc6486-x2ktx" event={"ID":"1f466831-6be5-42f8-85cc-a170c90ad516","Type":"ContainerStarted","Data":"e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.591671 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-rg8kd"] Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.592526 4730 generic.go:334] "Generic (PLEG): container finished" podID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerID="0fe8cc2ff85f09e05318581d4516d9956824f119e043d68a882c1f60cf68181d" exitCode=0 Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.592628 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c76d449-mtwzd" event={"ID":"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf","Type":"ContainerDied","Data":"0fe8cc2ff85f09e05318581d4516d9956824f119e043d68a882c1f60cf68181d"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.595958 4730 generic.go:334] "Generic (PLEG): container finished" podID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerID="8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5" exitCode=0 Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.596153 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" event={"ID":"9b9e6ee1-bfce-461b-a098-9444b2203023","Type":"ContainerDied","Data":"8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.602178 4730 generic.go:334] "Generic (PLEG): container finished" podID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerID="256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659" exitCode=0 Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.602202 4730 generic.go:334] "Generic (PLEG): container finished" podID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerID="b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9" exitCode=2 Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.602280 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerDied","Data":"256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.602339 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerDied","Data":"b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.608676 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c4d975ccf-jbdgk" event={"ID":"ce037144-daeb-412d-94f1-69bc4ed97935","Type":"ContainerStarted","Data":"2cfe363fef1de5060f96684421be736c96ad9374874a0526a31464bee4b2e28d"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.629418 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f7c76d449-mtwzd" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.629861 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df784bcc-98p6s" event={"ID":"00791e2a-6f2b-450d-acab-1ac4b91656ea","Type":"ContainerDied","Data":"7e1eae4eecd4806690635b1764262ee02692da8fa85829dd9c5b7fee7fd59e65"} Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.629987 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69df784bcc-98p6s" Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.646103 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-rg8kd"] Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.754863 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69df784bcc-98p6s"] Jan 31 16:47:07 crc kubenswrapper[4730]: I0131 16:47:07.784990 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69df784bcc-98p6s"] Jan 31 16:47:07 crc kubenswrapper[4730]: E0131 16:47:07.826648 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.067792 4730 scope.go:117] "RemoveContainer" containerID="86a7791b7f970b4c2e27e68a5edfed6a34033cd7cd6bd79d79246be431e08272" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.488836 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" path="/var/lib/kubelet/pods/00791e2a-6f2b-450d-acab-1ac4b91656ea/volumes" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.489635 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fadea706-a2c3-43dd-ba06-a43abab1f949" path="/var/lib/kubelet/pods/fadea706-a2c3-43dd-ba06-a43abab1f949/volumes" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.638244 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7567bc6486-x2ktx" event={"ID":"1f466831-6be5-42f8-85cc-a170c90ad516","Type":"ContainerStarted","Data":"db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d"} Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.639677 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.639708 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.642448 4730 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.642779 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72e56a4c-3291-45de-9dbe-da8f3ef14129","Type":"ContainerStarted","Data":"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a"} Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.642861 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.642928 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.668184 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7567bc6486-x2ktx" podStartSLOduration=13.66816755 podStartE2EDuration="13.66816755s" podCreationTimestamp="2026-01-31 16:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:08.655785547 +0000 UTC m=+1015.461842473" watchObservedRunningTime="2026-01-31 16:47:08.66816755 +0000 UTC m=+1015.474224466" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.677231 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-dc5f7996-jrfrx" podStartSLOduration=10.677214531 podStartE2EDuration="10.677214531s" podCreationTimestamp="2026-01-31 16:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:08.675137053 +0000 UTC m=+1015.481193999" watchObservedRunningTime="2026-01-31 16:47:08.677214531 +0000 UTC m=+1015.483271447" Jan 31 16:47:08 crc kubenswrapper[4730]: I0131 16:47:08.931053 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.020087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-config\") pod \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.020865 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xmgw\" (UniqueName: \"kubernetes.io/projected/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-kube-api-access-8xmgw\") pod \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.020974 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-ovndb-tls-certs\") pod \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.021012 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-combined-ca-bundle\") pod \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.021028 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-internal-tls-certs\") pod \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.021045 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-public-tls-certs\") pod \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.021087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-httpd-config\") pod \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\" (UID: \"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf\") " Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.026326 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" (UID: "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.034227 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-kube-api-access-8xmgw" (OuterVolumeSpecName: "kube-api-access-8xmgw") pod "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" (UID: "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf"). InnerVolumeSpecName "kube-api-access-8xmgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.088545 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" (UID: "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.095989 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" (UID: "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.105246 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" (UID: "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.111542 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-config" (OuterVolumeSpecName: "config") pod "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" (UID: "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.126060 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.126201 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.126256 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.126307 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.126367 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.126429 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xmgw\" (UniqueName: \"kubernetes.io/projected/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-kube-api-access-8xmgw\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.133786 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" (UID: "2ddb310b-d8e7-4a4a-aac3-44298afdb0bf"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.228181 4730 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.433453 4730 scope.go:117] "RemoveContainer" containerID="2dc6ce954598db57e2003a858ae5ba8949d40ef77652fb4e121d946900bfba08" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.652601 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c76d449-mtwzd" event={"ID":"2ddb310b-d8e7-4a4a-aac3-44298afdb0bf","Type":"ContainerDied","Data":"cf76ca416dca00faec9f3f2a189324c21adad4e3255c78f98d1191dd1103add1"} Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.652670 4730 scope.go:117] "RemoveContainer" containerID="233ceb1cdebc0314f0aa2c4b072811d20f666c035ea555f97792170c01fefd77" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.652689 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f7c76d449-mtwzd" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.702668 4730 scope.go:117] "RemoveContainer" containerID="0fe8cc2ff85f09e05318581d4516d9956824f119e043d68a882c1f60cf68181d" Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.715618 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f7c76d449-mtwzd"] Jan 31 16:47:09 crc kubenswrapper[4730]: I0131 16:47:09.721240 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f7c76d449-mtwzd"] Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.479030 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" path="/var/lib/kubelet/pods/2ddb310b-d8e7-4a4a-aac3-44298afdb0bf/volumes" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.668472 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" event={"ID":"73aa808b-e690-4e00-b458-4d30965fe1f8","Type":"ContainerStarted","Data":"534c6d5672ef391c24134a6efed50b8553101f9c8144b5b7334540832f8bf147"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.668512 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" event={"ID":"73aa808b-e690-4e00-b458-4d30965fe1f8","Type":"ContainerStarted","Data":"35a9fc7a85b5127e9a341c86425c3d0fb363a6e6eb3179ae6997ad8d29aebd26"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.671066 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e61373e-9345-4a2a-a252-15b10ed8ae59","Type":"ContainerStarted","Data":"e620e32ed19e4fbd5b3de95ab357dd2d5b3ab980414856192ef08454dcd59f7d"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.673200 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74c8bcbdc9-xg47w" event={"ID":"de24c449-9dfc-4e52-b571-ce305a73a1a7","Type":"ContainerStarted","Data":"95278df7ff67d22448efb681ef55c90815ec49ec6beaea41d8b0c40a39265e7b"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.673254 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-worker-74c8bcbdc9-xg47w" event={"ID":"de24c449-9dfc-4e52-b571-ce305a73a1a7","Type":"ContainerStarted","Data":"0ea6abaa2031ee3bc3222c281017ba606aa679142ee0e2a3b1eb3905186cd92b"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.674892 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c4d975ccf-jbdgk" event={"ID":"ce037144-daeb-412d-94f1-69bc4ed97935","Type":"ContainerStarted","Data":"121d3620dfc2837f919a1ce91fb7b81a3ff8df40b32c6dbaf64968b617c8adf0"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.675180 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.688865 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="435927c74b967706fe7ebdbf1eac2e63fbd02dfb571e581ab2e5e21f1b4671f8" exitCode=1 Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.688925 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"435927c74b967706fe7ebdbf1eac2e63fbd02dfb571e581ab2e5e21f1b4671f8"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.689626 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.689683 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.689762 4730 scope.go:117] "RemoveContainer" containerID="435927c74b967706fe7ebdbf1eac2e63fbd02dfb571e581ab2e5e21f1b4671f8" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.689787 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.709329 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72e56a4c-3291-45de-9dbe-da8f3ef14129","Type":"ContainerStarted","Data":"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.709489 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api-log" containerID="cri-o://68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a" gracePeriod=30 Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.709843 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.709872 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api" containerID="cri-o://bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f" gracePeriod=30 Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.717380 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c4d975ccf-jbdgk" podStartSLOduration=6.717361568 podStartE2EDuration="6.717361568s" podCreationTimestamp="2026-01-31 16:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
16:47:10.71198941 +0000 UTC m=+1017.518046326" watchObservedRunningTime="2026-01-31 16:47:10.717361568 +0000 UTC m=+1017.523418484" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.717947 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7ffbbc76b4-9vr9z" podStartSLOduration=11.380432637 podStartE2EDuration="15.717942754s" podCreationTimestamp="2026-01-31 16:46:55 +0000 UTC" firstStartedPulling="2026-01-31 16:47:05.21092273 +0000 UTC m=+1012.016979636" lastFinishedPulling="2026-01-31 16:47:09.548432827 +0000 UTC m=+1016.354489753" observedRunningTime="2026-01-31 16:47:10.692625143 +0000 UTC m=+1017.498682069" watchObservedRunningTime="2026-01-31 16:47:10.717942754 +0000 UTC m=+1017.523999670" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.748957 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-74c8bcbdc9-xg47w" podStartSLOduration=11.395529135 podStartE2EDuration="15.748943103s" podCreationTimestamp="2026-01-31 16:46:55 +0000 UTC" firstStartedPulling="2026-01-31 16:47:05.197280242 +0000 UTC m=+1012.003337158" lastFinishedPulling="2026-01-31 16:47:09.5506942 +0000 UTC m=+1016.356751126" observedRunningTime="2026-01-31 16:47:10.747870323 +0000 UTC m=+1017.553927239" watchObservedRunningTime="2026-01-31 16:47:10.748943103 +0000 UTC m=+1017.555000019" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.753473 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" event={"ID":"9b9e6ee1-bfce-461b-a098-9444b2203023","Type":"ContainerStarted","Data":"ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f"} Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.753522 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.830609 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" podStartSLOduration=7.8305952340000005 podStartE2EDuration="7.830595234s" podCreationTimestamp="2026-01-31 16:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:10.828329891 +0000 UTC m=+1017.634386807" watchObservedRunningTime="2026-01-31 16:47:10.830595234 +0000 UTC m=+1017.636652140" Jan 31 16:47:10 crc kubenswrapper[4730]: I0131 16:47:10.875791 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.875761125 podStartE2EDuration="7.875761125s" podCreationTimestamp="2026-01-31 16:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:10.873130762 +0000 UTC m=+1017.679187678" watchObservedRunningTime="2026-01-31 16:47:10.875761125 +0000 UTC m=+1017.681818041" Jan 31 16:47:11 crc kubenswrapper[4730]: E0131 16:47:11.056102 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.653437 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.762600 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e61373e-9345-4a2a-a252-15b10ed8ae59","Type":"ContainerStarted","Data":"68ed81ba62ed2dd28975adb4c094d623d873764cf26436510403551c35903fbd"} Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.767822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"1f3360e1f421204b7af9c6c32dc9ed3f548543f1cce4369ddb18b1d85fdb6ad2"} Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.768525 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.768585 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.768669 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:47:11 crc kubenswrapper[4730]: E0131 16:47:11.768964 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.769058 4730 generic.go:334] "Generic (PLEG): container finished" podID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerID="bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f" exitCode=0 Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.769084 4730 generic.go:334] "Generic (PLEG): container finished" podID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerID="68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a" exitCode=143 Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.769205 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72e56a4c-3291-45de-9dbe-da8f3ef14129","Type":"ContainerDied","Data":"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f"} Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.769231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"72e56a4c-3291-45de-9dbe-da8f3ef14129","Type":"ContainerDied","Data":"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a"} Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.769242 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72e56a4c-3291-45de-9dbe-da8f3ef14129","Type":"ContainerDied","Data":"acea7a41c528051854271c65b0e274b064e1267b9c59247a4b8cf64fb1664ed7"} Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.769255 4730 scope.go:117] "RemoveContainer" containerID="bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.769354 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.784477 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.259184314 podStartE2EDuration="8.784462459s" podCreationTimestamp="2026-01-31 16:47:03 +0000 UTC" firstStartedPulling="2026-01-31 16:47:06.023107341 +0000 UTC m=+1012.829164257" lastFinishedPulling="2026-01-31 16:47:09.548385486 +0000 UTC m=+1016.354442402" observedRunningTime="2026-01-31 16:47:11.779225314 +0000 UTC m=+1018.585282230" watchObservedRunningTime="2026-01-31 16:47:11.784462459 +0000 UTC m=+1018.590519375" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.794922 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data\") pod \"72e56a4c-3291-45de-9dbe-da8f3ef14129\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.794981 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72e56a4c-3291-45de-9dbe-da8f3ef14129-etc-machine-id\") pod \"72e56a4c-3291-45de-9dbe-da8f3ef14129\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.795067 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data-custom\") pod \"72e56a4c-3291-45de-9dbe-da8f3ef14129\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.795136 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e56a4c-3291-45de-9dbe-da8f3ef14129-logs\") pod \"72e56a4c-3291-45de-9dbe-da8f3ef14129\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.795188 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-combined-ca-bundle\") pod \"72e56a4c-3291-45de-9dbe-da8f3ef14129\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.795210 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-scripts\") pod \"72e56a4c-3291-45de-9dbe-da8f3ef14129\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 
16:47:11.795279 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thbfs\" (UniqueName: \"kubernetes.io/projected/72e56a4c-3291-45de-9dbe-da8f3ef14129-kube-api-access-thbfs\") pod \"72e56a4c-3291-45de-9dbe-da8f3ef14129\" (UID: \"72e56a4c-3291-45de-9dbe-da8f3ef14129\") " Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.797973 4730 scope.go:117] "RemoveContainer" containerID="68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.802239 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e56a4c-3291-45de-9dbe-da8f3ef14129-logs" (OuterVolumeSpecName: "logs") pod "72e56a4c-3291-45de-9dbe-da8f3ef14129" (UID: "72e56a4c-3291-45de-9dbe-da8f3ef14129"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.802617 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72e56a4c-3291-45de-9dbe-da8f3ef14129-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "72e56a4c-3291-45de-9dbe-da8f3ef14129" (UID: "72e56a4c-3291-45de-9dbe-da8f3ef14129"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.812548 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e56a4c-3291-45de-9dbe-da8f3ef14129-kube-api-access-thbfs" (OuterVolumeSpecName: "kube-api-access-thbfs") pod "72e56a4c-3291-45de-9dbe-da8f3ef14129" (UID: "72e56a4c-3291-45de-9dbe-da8f3ef14129"). InnerVolumeSpecName "kube-api-access-thbfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.812925 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72e56a4c-3291-45de-9dbe-da8f3ef14129" (UID: "72e56a4c-3291-45de-9dbe-da8f3ef14129"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.814917 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-scripts" (OuterVolumeSpecName: "scripts") pod "72e56a4c-3291-45de-9dbe-da8f3ef14129" (UID: "72e56a4c-3291-45de-9dbe-da8f3ef14129"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.826846 4730 scope.go:117] "RemoveContainer" containerID="bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f" Jan 31 16:47:11 crc kubenswrapper[4730]: E0131 16:47:11.830888 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f\": container with ID starting with bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f not found: ID does not exist" containerID="bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.830920 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f"} err="failed to get container status \"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f\": rpc error: code = NotFound desc = could not find container \"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f\": container with ID starting with bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f not found: ID does not exist" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.830940 4730 scope.go:117] "RemoveContainer" containerID="68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a" Jan 31 16:47:11 crc kubenswrapper[4730]: E0131 16:47:11.834145 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a\": container with ID starting with 68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a not found: ID does not exist" containerID="68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.834171 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a"} err="failed to get container status \"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a\": rpc error: code = NotFound desc = could not find container \"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a\": container with ID starting with 68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a not found: ID does not exist" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.834188 4730 scope.go:117] "RemoveContainer" containerID="bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.837674 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f"} err="failed to get container status \"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f\": rpc error: code = NotFound desc = could not find container \"bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f\": container with ID starting with bf8d8b0dd7fead69e2fc4bd938a6195148632c4ef969f1dcd3e2c6e1e5a10e8f not found: ID does not exist" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.837697 4730 scope.go:117] "RemoveContainer" containerID="68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.844932 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a"} err="failed to get container status \"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a\": rpc error: code = NotFound desc = could not find container \"68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a\": container with ID starting with 68ceea17db02c09904b7ae0a963818de69cc901960466f2b14d97380906cda0a not found: ID does not exist" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.854931 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72e56a4c-3291-45de-9dbe-da8f3ef14129" (UID: "72e56a4c-3291-45de-9dbe-da8f3ef14129"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.900175 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thbfs\" (UniqueName: \"kubernetes.io/projected/72e56a4c-3291-45de-9dbe-da8f3ef14129-kube-api-access-thbfs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.900220 4730 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72e56a4c-3291-45de-9dbe-da8f3ef14129-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.900231 4730 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.900241 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72e56a4c-3291-45de-9dbe-da8f3ef14129-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.900249 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.900256 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:11 crc kubenswrapper[4730]: I0131 16:47:11.907678 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data" (OuterVolumeSpecName: "config-data") pod "72e56a4c-3291-45de-9dbe-da8f3ef14129" (UID: "72e56a4c-3291-45de-9dbe-da8f3ef14129"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.001869 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e56a4c-3291-45de-9dbe-da8f3ef14129-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.163824 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.191994 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.201695 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.202173 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-httpd" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202232 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-httpd" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.202248 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadea706-a2c3-43dd-ba06-a43abab1f949" containerName="init" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202255 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadea706-a2c3-43dd-ba06-a43abab1f949" containerName="init" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.202264 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202270 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.202313 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api-log" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202321 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api-log" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.202330 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-api" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202336 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-api" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.202351 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon-log" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202357 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon-log" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.202365 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202393 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202651 4730 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fadea706-a2c3-43dd-ba06-a43abab1f949" containerName="init" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202660 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202668 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" containerName="cinder-api-log" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202680 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon-log" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202711 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="00791e2a-6f2b-450d-acab-1ac4b91656ea" containerName="horizon" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202718 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-httpd" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.202727 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ddb310b-d8e7-4a4a-aac3-44298afdb0bf" containerName="neutron-api" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.210078 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.216059 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.216491 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.216690 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.222348 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.262697 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.309912 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310001 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310022 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-scripts\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310079 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310109 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-logs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310126 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310142 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310158 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-config-data\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.310172 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzqp\" (UniqueName: \"kubernetes.io/projected/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-kube-api-access-jfzqp\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411488 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-combined-ca-bundle\") pod \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411563 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-run-httpd\") pod \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411603 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-sg-core-conf-yaml\") pod \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411623 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-config-data\") pod \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411655 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-scripts\") pod \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411673 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-log-httpd\") pod \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411706 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87zjb\" (UniqueName: \"kubernetes.io/projected/f0d3583d-f56f-4f4b-87cb-e748976d47f6-kube-api-access-87zjb\") pod \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\" (UID: \"f0d3583d-f56f-4f4b-87cb-e748976d47f6\") " Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411919 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411958 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-logs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411975 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.411994 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.412013 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-config-data\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.412035 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfzqp\" (UniqueName: \"kubernetes.io/projected/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-kube-api-access-jfzqp\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.412080 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.412138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.412155 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-scripts\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.414474 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f0d3583d-f56f-4f4b-87cb-e748976d47f6" (UID: "f0d3583d-f56f-4f4b-87cb-e748976d47f6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.416372 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.416688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-logs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.417195 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f0d3583d-f56f-4f4b-87cb-e748976d47f6" (UID: "f0d3583d-f56f-4f4b-87cb-e748976d47f6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.422227 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-scripts\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.422696 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.423918 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.431902 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-scripts" (OuterVolumeSpecName: "scripts") pod "f0d3583d-f56f-4f4b-87cb-e748976d47f6" (UID: "f0d3583d-f56f-4f4b-87cb-e748976d47f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.435026 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.435157 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d3583d-f56f-4f4b-87cb-e748976d47f6-kube-api-access-87zjb" (OuterVolumeSpecName: "kube-api-access-87zjb") pod "f0d3583d-f56f-4f4b-87cb-e748976d47f6" (UID: "f0d3583d-f56f-4f4b-87cb-e748976d47f6"). InnerVolumeSpecName "kube-api-access-87zjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.435852 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-config-data\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.436352 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.437953 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfzqp\" (UniqueName: \"kubernetes.io/projected/fb708c6f-d3c0-4b3c-a4d9-48b759f11153-kube-api-access-jfzqp\") pod \"cinder-api-0\" (UID: \"fb708c6f-d3c0-4b3c-a4d9-48b759f11153\") " pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.475247 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f0d3583d-f56f-4f4b-87cb-e748976d47f6" (UID: "f0d3583d-f56f-4f4b-87cb-e748976d47f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.479100 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e56a4c-3291-45de-9dbe-da8f3ef14129" path="/var/lib/kubelet/pods/72e56a4c-3291-45de-9dbe-da8f3ef14129/volumes" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.488529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0d3583d-f56f-4f4b-87cb-e748976d47f6" (UID: "f0d3583d-f56f-4f4b-87cb-e748976d47f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.498380 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-config-data" (OuterVolumeSpecName: "config-data") pod "f0d3583d-f56f-4f4b-87cb-e748976d47f6" (UID: "f0d3583d-f56f-4f4b-87cb-e748976d47f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.513407 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.513435 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.513444 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.513452 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.513460 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0d3583d-f56f-4f4b-87cb-e748976d47f6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.513470 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0d3583d-f56f-4f4b-87cb-e748976d47f6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.513479 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87zjb\" (UniqueName: \"kubernetes.io/projected/f0d3583d-f56f-4f4b-87cb-e748976d47f6-kube-api-access-87zjb\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.575745 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.786429 4730 generic.go:334] "Generic (PLEG): container finished" podID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerID="6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7" exitCode=0 Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.786664 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerDied","Data":"6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7"} Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.786855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0d3583d-f56f-4f4b-87cb-e748976d47f6","Type":"ContainerDied","Data":"c0574d423338aeba52c57796ec24f2aee86ea7ca73766688662e346ebbf923f4"} Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.786878 4730 scope.go:117] "RemoveContainer" containerID="256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.786714 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.825846 4730 scope.go:117] "RemoveContainer" containerID="b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.891685 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.898522 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.905204 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.905613 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="ceilometer-notification-agent" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.905623 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="ceilometer-notification-agent" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.905639 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="proxy-httpd" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.905645 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="proxy-httpd" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.905657 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="sg-core" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.905663 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="sg-core" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.905855 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="sg-core" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.905869 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="ceilometer-notification-agent" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.905888 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" containerName="proxy-httpd" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.906160 4730 scope.go:117] "RemoveContainer" containerID="6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.914540 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.914711 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.921431 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.921620 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.923228 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.924399 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.924464 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:49:14.924446618 +0000 UTC m=+1141.730503534 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.951881 4730 scope.go:117] "RemoveContainer" containerID="256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.952285 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659\": container with ID starting with 256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659 not found: ID does not exist" containerID="256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.952315 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659"} err="failed to get container status \"256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659\": rpc error: code = NotFound desc = could not find container \"256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659\": container with ID starting with 256a4a8f260726f452ebad1150c68e0fa7a874c4e29cda9989e22e6bfb148659 not found: ID does not exist" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.952336 4730 scope.go:117] "RemoveContainer" containerID="b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.952661 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9\": container with ID starting with b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9 not found: ID does not exist" containerID="b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9" Jan 31 16:47:12 crc kubenswrapper[4730]: 
I0131 16:47:12.952682 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9"} err="failed to get container status \"b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9\": rpc error: code = NotFound desc = could not find container \"b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9\": container with ID starting with b9f9052e94db5bc04e100f5fe656802b63b022bfdcfb7ba3c44aa0250a6d30b9 not found: ID does not exist" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.952699 4730 scope.go:117] "RemoveContainer" containerID="6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7" Jan 31 16:47:12 crc kubenswrapper[4730]: E0131 16:47:12.953040 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7\": container with ID starting with 6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7 not found: ID does not exist" containerID="6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7" Jan 31 16:47:12 crc kubenswrapper[4730]: I0131 16:47:12.953082 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7"} err="failed to get container status \"6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7\": rpc error: code = NotFound desc = could not find container \"6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7\": container with ID starting with 6dc2a926bdd3745332bd1dcaf7a8d96116f53c861159a769b35fe63153b415e7 not found: ID does not exist" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.024487 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.024732 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-scripts\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.024873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.024960 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xqb\" (UniqueName: \"kubernetes.io/projected/877c4ba1-eb00-492d-8ef4-afef049a1e25-kube-api-access-c9xqb\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.025047 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-log-httpd\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.025122 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-config-data\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.025190 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-run-httpd\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.048194 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.126505 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.126567 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9xqb\" (UniqueName: \"kubernetes.io/projected/877c4ba1-eb00-492d-8ef4-afef049a1e25-kube-api-access-c9xqb\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.126610 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-log-httpd\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.126632 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-config-data\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.126650 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-run-httpd\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.126708 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.126725 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-scripts\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc 
kubenswrapper[4730]: I0131 16:47:13.127761 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-run-httpd\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.127825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-log-httpd\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.135815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.139533 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-scripts\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.143199 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.143511 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-config-data\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.148605 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9xqb\" (UniqueName: \"kubernetes.io/projected/877c4ba1-eb00-492d-8ef4-afef049a1e25-kube-api-access-c9xqb\") pod \"ceilometer-0\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.235396 4730 util.go:30] "No sandbox for pod can be found. 
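Annotation: a few entries back (16:47:12.924) the mount of "ring-data-devices" for swift-ring-rebalance-md2pb failed because configmap "swift-ring-config-data" does not exist yet, and the operation was parked with "No retries permitted until … (durationBeforeRetry 2m2s)". That is the volume manager's exponential backoff: each failure roughly doubles the wait, up to a cap. A small sketch of that doubling (the initial delay and cap here are illustrative, not asserted as the kubelet's exact constants):

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous retry delay up to maxDelay, the pattern
// behind the growing durationBeforeRetry values in entries like the one above.
func nextDelay(prev, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return 500 * time.Millisecond // illustrative initial delay
	}
	next := 2 * prev
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 10; i++ {
		d = nextDelay(d, 2*time.Minute+2*time.Second)
		fmt.Printf("attempt %d: wait %v\n", i+1, d)
	}
}
```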
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.718011 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:13 crc kubenswrapper[4730]: W0131 16:47:13.724506 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-218e82bc8b78afb2cfa7a51c4e862e02f10a94e7d4cf384eb9cc3f90bfd18195 WatchSource:0}: Error finding container 218e82bc8b78afb2cfa7a51c4e862e02f10a94e7d4cf384eb9cc3f90bfd18195: Status 404 returned error can't find the container with id 218e82bc8b78afb2cfa7a51c4e862e02f10a94e7d4cf384eb9cc3f90bfd18195 Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.769670 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.799455 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb708c6f-d3c0-4b3c-a4d9-48b759f11153","Type":"ContainerStarted","Data":"61d77b65c02ee5fe23ef5bd33229e257aa0d460ea7e4ba86acc5fa328b5e5f34"} Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.800619 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerStarted","Data":"218e82bc8b78afb2cfa7a51c4e862e02f10a94e7d4cf384eb9cc3f90bfd18195"} Jan 31 16:47:13 crc kubenswrapper[4730]: I0131 16:47:13.816756 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 16:47:14 crc kubenswrapper[4730]: I0131 16:47:14.482262 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d3583d-f56f-4f4b-87cb-e748976d47f6" path="/var/lib/kubelet/pods/f0d3583d-f56f-4f4b-87cb-e748976d47f6/volumes" Jan 31 16:47:14 crc kubenswrapper[4730]: I0131 16:47:14.823981 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb708c6f-d3c0-4b3c-a4d9-48b759f11153","Type":"ContainerStarted","Data":"5adf42ffdd1d4e4e4a27754cd75472a6e7b831059bd3cd89e6cb7acaf2e5949a"} Jan 31 16:47:14 crc kubenswrapper[4730]: I0131 16:47:14.826710 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerStarted","Data":"3fa22c4d744601ff6674179694376a99c4fef6f9d54c771026c5598766e2a9ff"} Jan 31 16:47:15 crc kubenswrapper[4730]: I0131 16:47:15.838268 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb708c6f-d3c0-4b3c-a4d9-48b759f11153","Type":"ContainerStarted","Data":"703a033862540c93ecc3d44f75d5518f99a37b9752bf82aca1edd1285bb0a441"} Jan 31 16:47:15 crc kubenswrapper[4730]: I0131 16:47:15.838711 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 16:47:15 crc kubenswrapper[4730]: I0131 16:47:15.840920 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerStarted","Data":"296b42a59e94cef411931555bddc305100a561217706090634d8c3fe6ce07a4a"} Jan 31 16:47:15 crc kubenswrapper[4730]: I0131 16:47:15.873061 4730 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.8730438019999998 podStartE2EDuration="3.873043802s" podCreationTimestamp="2026-01-31 16:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:15.865062551 +0000 UTC m=+1022.671119457" watchObservedRunningTime="2026-01-31 16:47:15.873043802 +0000 UTC m=+1022.679100718" Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.074347 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.502381 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dc5f7996-jrfrx" Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.559514 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7567bc6486-x2ktx"] Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.559812 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" containerID="cri-o://e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596" gracePeriod=30 Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.559961 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" containerID="cri-o://db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d" gracePeriod=30 Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.587313 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": EOF" Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.587361 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": EOF" Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.587595 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": EOF" Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.849970 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerStarted","Data":"a2be787384af66bed63096f131082f64eaec12e23d326f16f0b0036499d0103b"} Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.851681 4730 generic.go:334] "Generic (PLEG): container finished" podID="1f466831-6be5-42f8-85cc-a170c90ad516" containerID="e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596" exitCode=143 Jan 31 16:47:16 crc kubenswrapper[4730]: I0131 16:47:16.851742 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7567bc6486-x2ktx" event={"ID":"1f466831-6be5-42f8-85cc-a170c90ad516","Type":"ContainerDied","Data":"e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596"} Jan 31 16:47:18 crc 
kubenswrapper[4730]: I0131 16:47:18.866026 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:47:18 crc kubenswrapper[4730]: I0131 16:47:18.878161 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerStarted","Data":"c07097b7fa261f34fdadae6ff4e82507af2c2a89feb0f9c5faa981e203395058"} Jan 31 16:47:18 crc kubenswrapper[4730]: I0131 16:47:18.878482 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:47:18 crc kubenswrapper[4730]: I0131 16:47:18.959529 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-4nwfb"] Jan 31 16:47:18 crc kubenswrapper[4730]: I0131 16:47:18.959819 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" podUID="10c629d7-5578-4c73-bdd7-69b268cca700" containerName="dnsmasq-dns" containerID="cri-o://ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b" gracePeriod=10 Jan 31 16:47:18 crc kubenswrapper[4730]: I0131 16:47:18.965921 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.796216683 podStartE2EDuration="6.965903193s" podCreationTimestamp="2026-01-31 16:47:12 +0000 UTC" firstStartedPulling="2026-01-31 16:47:13.727334812 +0000 UTC m=+1020.533391728" lastFinishedPulling="2026-01-31 16:47:17.897021322 +0000 UTC m=+1024.703078238" observedRunningTime="2026-01-31 16:47:18.954488137 +0000 UTC m=+1025.760545053" watchObservedRunningTime="2026-01-31 16:47:18.965903193 +0000 UTC m=+1025.771960109" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.187120 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.258947 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.662825 4730 util.go:48] "No ready sandbox for pod can be found. 
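Annotation: the pod_startup_latency_tracker entry for ceilometer-0 records two durations: podStartE2EDuration (observed running at 16:47:18.965903193 minus creation at 16:47:12) and podStartSLOduration, which additionally subtracts the time spent pulling images (firstStartedPulling 16:47:13.727334812 through lastFinishedPulling 16:47:17.897021322). The numbers in the entry check out: 6.965903193s − 4.169686510s = 2.796216683s. A few lines of Go reproducing the arithmetic from the timestamps in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-31T16:47:12Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-31T16:47:18.965903193Z")
	pullStart, _ := time.Parse(time.RFC3339Nano, "2026-01-31T16:47:13.727334812Z")
	pullEnd, _ := time.Parse(time.RFC3339Nano, "2026-01-31T16:47:17.897021322Z")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration excludes image pulls
	fmt.Println(e2e, slo)               // 6.965903193s 2.796216683s
}
```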
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.713463 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-config\") pod \"10c629d7-5578-4c73-bdd7-69b268cca700\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.713517 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-nb\") pod \"10c629d7-5578-4c73-bdd7-69b268cca700\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.713572 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzzs2\" (UniqueName: \"kubernetes.io/projected/10c629d7-5578-4c73-bdd7-69b268cca700-kube-api-access-rzzs2\") pod \"10c629d7-5578-4c73-bdd7-69b268cca700\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.713595 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-dns-svc\") pod \"10c629d7-5578-4c73-bdd7-69b268cca700\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.713640 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-sb\") pod \"10c629d7-5578-4c73-bdd7-69b268cca700\" (UID: \"10c629d7-5578-4c73-bdd7-69b268cca700\") " Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.727530 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c629d7-5578-4c73-bdd7-69b268cca700-kube-api-access-rzzs2" (OuterVolumeSpecName: "kube-api-access-rzzs2") pod "10c629d7-5578-4c73-bdd7-69b268cca700" (UID: "10c629d7-5578-4c73-bdd7-69b268cca700"). InnerVolumeSpecName "kube-api-access-rzzs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.800689 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "10c629d7-5578-4c73-bdd7-69b268cca700" (UID: "10c629d7-5578-4c73-bdd7-69b268cca700"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.801118 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-config" (OuterVolumeSpecName: "config") pod "10c629d7-5578-4c73-bdd7-69b268cca700" (UID: "10c629d7-5578-4c73-bdd7-69b268cca700"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.816036 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.816082 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzzs2\" (UniqueName: \"kubernetes.io/projected/10c629d7-5578-4c73-bdd7-69b268cca700-kube-api-access-rzzs2\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.816095 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.820247 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "10c629d7-5578-4c73-bdd7-69b268cca700" (UID: "10c629d7-5578-4c73-bdd7-69b268cca700"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.836365 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "10c629d7-5578-4c73-bdd7-69b268cca700" (UID: "10c629d7-5578-4c73-bdd7-69b268cca700"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.887250 4730 generic.go:334] "Generic (PLEG): container finished" podID="10c629d7-5578-4c73-bdd7-69b268cca700" containerID="ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b" exitCode=0 Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.887462 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="cinder-scheduler" containerID="cri-o://e620e32ed19e4fbd5b3de95ab357dd2d5b3ab980414856192ef08454dcd59f7d" gracePeriod=30 Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.887762 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.889913 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" event={"ID":"10c629d7-5578-4c73-bdd7-69b268cca700","Type":"ContainerDied","Data":"ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b"} Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.889963 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-4nwfb" event={"ID":"10c629d7-5578-4c73-bdd7-69b268cca700","Type":"ContainerDied","Data":"d0ff38d8f8f8d17d662c70c2dd568c621a85694f75bbb8f49f8a57469a8f847f"} Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.889979 4730 scope.go:117] "RemoveContainer" containerID="ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.890173 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="probe" containerID="cri-o://68ed81ba62ed2dd28975adb4c094d623d873764cf26436510403551c35903fbd" gracePeriod=30 Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.926638 4730 scope.go:117] "RemoveContainer" containerID="208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.928488 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.928508 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10c629d7-5578-4c73-bdd7-69b268cca700-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.938792 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-4nwfb"] Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.942452 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-4nwfb"] Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.979079 4730 scope.go:117] "RemoveContainer" containerID="ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b" Jan 31 16:47:19 crc kubenswrapper[4730]: E0131 16:47:19.982194 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b\": container with ID starting with ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b not found: ID does not exist" containerID="ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.982225 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b"} err="failed to get container status \"ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b\": rpc error: code = NotFound desc = could not find container \"ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b\": container with ID starting with ce31c1f8e5f9a39995f032e00d2a30cd53f30b19d354a8b36caf1c07ff2b782b not found: ID does not exist" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.982248 4730 scope.go:117] 
"RemoveContainer" containerID="208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878" Jan 31 16:47:19 crc kubenswrapper[4730]: E0131 16:47:19.987974 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878\": container with ID starting with 208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878 not found: ID does not exist" containerID="208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878" Jan 31 16:47:19 crc kubenswrapper[4730]: I0131 16:47:19.987997 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878"} err="failed to get container status \"208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878\": rpc error: code = NotFound desc = could not find container \"208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878\": container with ID starting with 208bacecfa79d23c3a1006ba86f7bda4078ab6e4d5a66b18fb30aa1d7283f878 not found: ID does not exist" Jan 31 16:47:20 crc kubenswrapper[4730]: I0131 16:47:20.475070 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c629d7-5578-4c73-bdd7-69b268cca700" path="/var/lib/kubelet/pods/10c629d7-5578-4c73-bdd7-69b268cca700/volumes" Jan 31 16:47:20 crc kubenswrapper[4730]: I0131 16:47:20.896144 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerID="68ed81ba62ed2dd28975adb4c094d623d873764cf26436510403551c35903fbd" exitCode=0 Jan 31 16:47:20 crc kubenswrapper[4730]: I0131 16:47:20.896379 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e61373e-9345-4a2a-a252-15b10ed8ae59","Type":"ContainerDied","Data":"68ed81ba62ed2dd28975adb4c094d623d873764cf26436510403551c35903fbd"} Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.272741 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-959768976-4n77c" Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.286006 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-959768976-4n77c" Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.371597 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.672020 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.672479 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.672847 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b6cc64d78-7m9cj" Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.672888 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b54468f66-vfdd4" Jan 31 16:47:21 crc kubenswrapper[4730]: I0131 16:47:21.753666 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-959768976-4n77c"] Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.116628 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:41996->10.217.0.166:9311: read: connection reset by peer" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.117060 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7567bc6486-x2ktx" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:42012->10.217.0.166:9311: read: connection reset by peer" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.554004 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.684606 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data-custom\") pod \"1f466831-6be5-42f8-85cc-a170c90ad516\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.685737 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-combined-ca-bundle\") pod \"1f466831-6be5-42f8-85cc-a170c90ad516\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.685959 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v97mt\" (UniqueName: \"kubernetes.io/projected/1f466831-6be5-42f8-85cc-a170c90ad516-kube-api-access-v97mt\") pod \"1f466831-6be5-42f8-85cc-a170c90ad516\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.686134 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data\") pod \"1f466831-6be5-42f8-85cc-a170c90ad516\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.686451 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f466831-6be5-42f8-85cc-a170c90ad516-logs\") pod \"1f466831-6be5-42f8-85cc-a170c90ad516\" (UID: \"1f466831-6be5-42f8-85cc-a170c90ad516\") " Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.687238 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f466831-6be5-42f8-85cc-a170c90ad516-logs" (OuterVolumeSpecName: "logs") pod "1f466831-6be5-42f8-85cc-a170c90ad516" (UID: "1f466831-6be5-42f8-85cc-a170c90ad516"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.691409 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f466831-6be5-42f8-85cc-a170c90ad516" (UID: "1f466831-6be5-42f8-85cc-a170c90ad516"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.710152 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f466831-6be5-42f8-85cc-a170c90ad516-kube-api-access-v97mt" (OuterVolumeSpecName: "kube-api-access-v97mt") pod "1f466831-6be5-42f8-85cc-a170c90ad516" (UID: "1f466831-6be5-42f8-85cc-a170c90ad516"). InnerVolumeSpecName "kube-api-access-v97mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.735499 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f466831-6be5-42f8-85cc-a170c90ad516" (UID: "1f466831-6be5-42f8-85cc-a170c90ad516"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.744546 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data" (OuterVolumeSpecName: "config-data") pod "1f466831-6be5-42f8-85cc-a170c90ad516" (UID: "1f466831-6be5-42f8-85cc-a170c90ad516"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.788874 4730 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.789045 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.789130 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v97mt\" (UniqueName: \"kubernetes.io/projected/1f466831-6be5-42f8-85cc-a170c90ad516-kube-api-access-v97mt\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.789192 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f466831-6be5-42f8-85cc-a170c90ad516-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.789246 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f466831-6be5-42f8-85cc-a170c90ad516-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.913777 4730 generic.go:334] "Generic (PLEG): container finished" podID="1f466831-6be5-42f8-85cc-a170c90ad516" containerID="db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d" exitCode=0 Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.913843 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7567bc6486-x2ktx" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.913852 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7567bc6486-x2ktx" event={"ID":"1f466831-6be5-42f8-85cc-a170c90ad516","Type":"ContainerDied","Data":"db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d"} Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.914130 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7567bc6486-x2ktx" event={"ID":"1f466831-6be5-42f8-85cc-a170c90ad516","Type":"ContainerDied","Data":"03d261bdb7b9ad2c931f4cfad3d37d7e73b910299458b30defff9ec723576308"} Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.914167 4730 scope.go:117] "RemoveContainer" containerID="db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.914359 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-959768976-4n77c" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-log" containerID="cri-o://636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0" gracePeriod=30 Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.914796 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-959768976-4n77c" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-api" containerID="cri-o://d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596" gracePeriod=30 Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.938315 4730 scope.go:117] "RemoveContainer" containerID="e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.951938 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7567bc6486-x2ktx"] Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.956719 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7567bc6486-x2ktx"] Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.963956 4730 scope.go:117] "RemoveContainer" containerID="db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d" Jan 31 16:47:22 crc kubenswrapper[4730]: E0131 16:47:22.964381 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d\": container with ID starting with db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d not found: ID does not exist" containerID="db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.964415 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d"} err="failed to get container status \"db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d\": rpc error: code = NotFound desc = could not find container \"db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d\": container with ID starting with db74e3e30b1a27c4e63ccec599b7ec7bcc126a183a076b578ac6ae06a9a6ca6d not found: ID does not exist" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.964459 4730 scope.go:117] "RemoveContainer" containerID="e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596" Jan 31 16:47:22 crc 
kubenswrapper[4730]: E0131 16:47:22.964939 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596\": container with ID starting with e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596 not found: ID does not exist" containerID="e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596" Jan 31 16:47:22 crc kubenswrapper[4730]: I0131 16:47:22.965047 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596"} err="failed to get container status \"e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596\": rpc error: code = NotFound desc = could not find container \"e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596\": container with ID starting with e26a31f0001a4fea1ac7d20702847e80592a1c92fc4ff9a177ad3ad6f7191596 not found: ID does not exist" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524097 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 16:47:23 crc kubenswrapper[4730]: E0131 16:47:23.524421 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c629d7-5578-4c73-bdd7-69b268cca700" containerName="init" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524432 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c629d7-5578-4c73-bdd7-69b268cca700" containerName="init" Jan 31 16:47:23 crc kubenswrapper[4730]: E0131 16:47:23.524457 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524464 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" Jan 31 16:47:23 crc kubenswrapper[4730]: E0131 16:47:23.524480 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524485 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" Jan 31 16:47:23 crc kubenswrapper[4730]: E0131 16:47:23.524497 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c629d7-5578-4c73-bdd7-69b268cca700" containerName="dnsmasq-dns" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524502 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c629d7-5578-4c73-bdd7-69b268cca700" containerName="dnsmasq-dns" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524651 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524664 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c629d7-5578-4c73-bdd7-69b268cca700" containerName="dnsmasq-dns" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.524681 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" containerName="barbican-api-log" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.530036 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.553507 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.553665 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.553862 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-prdvb" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.560871 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.602984 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-openstack-config-secret\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.603206 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6bs4\" (UniqueName: \"kubernetes.io/projected/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-kube-api-access-r6bs4\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.603358 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.603464 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-openstack-config\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.705487 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-openstack-config\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.705738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-openstack-config-secret\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.705776 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6bs4\" (UniqueName: \"kubernetes.io/projected/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-kube-api-access-r6bs4\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.705884 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.706343 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-openstack-config\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.710000 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.711193 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-openstack-config-secret\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.731453 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6bs4\" (UniqueName: \"kubernetes.io/projected/7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d-kube-api-access-r6bs4\") pod \"openstackclient\" (UID: \"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d\") " pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.868190 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.924062 4730 generic.go:334] "Generic (PLEG): container finished" podID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerID="636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0" exitCode=143 Jan 31 16:47:23 crc kubenswrapper[4730]: I0131 16:47:23.924143 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-959768976-4n77c" event={"ID":"cae13f89-c09f-4e59-b3e5-7de6b4562d17","Type":"ContainerDied","Data":"636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0"} Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.454227 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.468032 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.468112 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.468215 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.474906 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f466831-6be5-42f8-85cc-a170c90ad516" path="/var/lib/kubelet/pods/1f466831-6be5-42f8-85cc-a170c90ad516/volumes" Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.683253 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.942411 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerID="e620e32ed19e4fbd5b3de95ab357dd2d5b3ab980414856192ef08454dcd59f7d" exitCode=0 Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.942464 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e61373e-9345-4a2a-a252-15b10ed8ae59","Type":"ContainerDied","Data":"e620e32ed19e4fbd5b3de95ab357dd2d5b3ab980414856192ef08454dcd59f7d"} Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.949140 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d","Type":"ContainerStarted","Data":"177df6cba3ca10cf898e2cf586485261863dd134e16ef80ae9fc486a7e18b025"} Jan 31 16:47:24 crc kubenswrapper[4730]: I0131 16:47:24.975676 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c"} Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.160314 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.237962 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data-custom\") pod \"8e61373e-9345-4a2a-a252-15b10ed8ae59\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.238350 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data\") pod \"8e61373e-9345-4a2a-a252-15b10ed8ae59\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.238392 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-combined-ca-bundle\") pod \"8e61373e-9345-4a2a-a252-15b10ed8ae59\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.238424 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e61373e-9345-4a2a-a252-15b10ed8ae59-etc-machine-id\") pod \"8e61373e-9345-4a2a-a252-15b10ed8ae59\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.238450 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-scripts\") pod \"8e61373e-9345-4a2a-a252-15b10ed8ae59\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.238508 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntlbv\" (UniqueName: \"kubernetes.io/projected/8e61373e-9345-4a2a-a252-15b10ed8ae59-kube-api-access-ntlbv\") pod \"8e61373e-9345-4a2a-a252-15b10ed8ae59\" (UID: \"8e61373e-9345-4a2a-a252-15b10ed8ae59\") " Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.242004 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e61373e-9345-4a2a-a252-15b10ed8ae59-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e61373e-9345-4a2a-a252-15b10ed8ae59" (UID: "8e61373e-9345-4a2a-a252-15b10ed8ae59"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.247965 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e61373e-9345-4a2a-a252-15b10ed8ae59" (UID: "8e61373e-9345-4a2a-a252-15b10ed8ae59"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.255544 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e61373e-9345-4a2a-a252-15b10ed8ae59-kube-api-access-ntlbv" (OuterVolumeSpecName: "kube-api-access-ntlbv") pod "8e61373e-9345-4a2a-a252-15b10ed8ae59" (UID: "8e61373e-9345-4a2a-a252-15b10ed8ae59"). InnerVolumeSpecName "kube-api-access-ntlbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.296082 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-scripts" (OuterVolumeSpecName: "scripts") pod "8e61373e-9345-4a2a-a252-15b10ed8ae59" (UID: "8e61373e-9345-4a2a-a252-15b10ed8ae59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.340415 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntlbv\" (UniqueName: \"kubernetes.io/projected/8e61373e-9345-4a2a-a252-15b10ed8ae59-kube-api-access-ntlbv\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.340441 4730 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.340450 4730 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e61373e-9345-4a2a-a252-15b10ed8ae59-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.340457 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.362010 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e61373e-9345-4a2a-a252-15b10ed8ae59" (UID: "8e61373e-9345-4a2a-a252-15b10ed8ae59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.428907 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data" (OuterVolumeSpecName: "config-data") pod "8e61373e-9345-4a2a-a252-15b10ed8ae59" (UID: "8e61373e-9345-4a2a-a252-15b10ed8ae59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.442115 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:25 crc kubenswrapper[4730]: I0131 16:47:25.442143 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e61373e-9345-4a2a-a252-15b10ed8ae59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.003749 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8e61373e-9345-4a2a-a252-15b10ed8ae59","Type":"ContainerDied","Data":"d684d350362a86a8a895e1f5b9f51e0a2e37ecbf4160f0454b727f14afc23035"} Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.003842 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.003850 4730 scope.go:117] "RemoveContainer" containerID="68ed81ba62ed2dd28975adb4c094d623d873764cf26436510403551c35903fbd" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.025462 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" exitCode=1 Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.025489 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" exitCode=1 Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.025511 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c"} Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.025538 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff"} Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.025548 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4"} Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.026005 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.026077 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:47:26 crc kubenswrapper[4730]: E0131 16:47:26.026564 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.041868 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.069142 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.077887 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:26 crc kubenswrapper[4730]: E0131 16:47:26.078581 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="probe" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.078597 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="probe" Jan 31 16:47:26 crc kubenswrapper[4730]: E0131 16:47:26.078604 4730 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="cinder-scheduler" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.078611 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="cinder-scheduler" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.078785 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="probe" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.078815 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" containerName="cinder-scheduler" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.079083 4730 scope.go:117] "RemoveContainer" containerID="e620e32ed19e4fbd5b3de95ab357dd2d5b3ab980414856192ef08454dcd59f7d" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.079733 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.082563 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.102346 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.119217 4730 scope.go:117] "RemoveContainer" containerID="a3b9aa96106c040897ae7759c8c7e37b4c35ba48dfb1207fcdec7d8f7b5bd348" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.175041 4730 scope.go:117] "RemoveContainer" containerID="1c717feb04948860ffe61e8e59ace1903fbec0985f999c6eca36640a682381f5" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.263991 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f328aa35-7979-4ff9-ab15-57e088728259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.264056 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.264125 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzn87\" (UniqueName: \"kubernetes.io/projected/f328aa35-7979-4ff9-ab15-57e088728259-kube-api-access-vzn87\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.264323 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-config-data\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.264374 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-scripts\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.264412 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.364987 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-config-data\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.365044 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-scripts\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.365064 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.365121 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f328aa35-7979-4ff9-ab15-57e088728259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.365138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.365179 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzn87\" (UniqueName: \"kubernetes.io/projected/f328aa35-7979-4ff9-ab15-57e088728259-kube-api-access-vzn87\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.373482 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f328aa35-7979-4ff9-ab15-57e088728259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.377429 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-config-data\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 
16:47:26.386920 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.387144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-scripts\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.387729 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f328aa35-7979-4ff9-ab15-57e088728259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.387852 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzn87\" (UniqueName: \"kubernetes.io/projected/f328aa35-7979-4ff9-ab15-57e088728259-kube-api-access-vzn87\") pod \"cinder-scheduler-0\" (UID: \"f328aa35-7979-4ff9-ab15-57e088728259\") " pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.416691 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.490542 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e61373e-9345-4a2a-a252-15b10ed8ae59" path="/var/lib/kubelet/pods/8e61373e-9345-4a2a-a252-15b10ed8ae59/volumes" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.592741 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-959768976-4n77c" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.672332 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae13f89-c09f-4e59-b3e5-7de6b4562d17-logs\") pod \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.672582 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-internal-tls-certs\") pod \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.672619 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85q6\" (UniqueName: \"kubernetes.io/projected/cae13f89-c09f-4e59-b3e5-7de6b4562d17-kube-api-access-m85q6\") pod \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.672646 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-combined-ca-bundle\") pod \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.672709 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-public-tls-certs\") pod \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.672732 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-config-data\") pod \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.672784 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-scripts\") pod \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\" (UID: \"cae13f89-c09f-4e59-b3e5-7de6b4562d17\") " Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.678434 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae13f89-c09f-4e59-b3e5-7de6b4562d17-logs" (OuterVolumeSpecName: "logs") pod "cae13f89-c09f-4e59-b3e5-7de6b4562d17" (UID: "cae13f89-c09f-4e59-b3e5-7de6b4562d17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.683413 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-scripts" (OuterVolumeSpecName: "scripts") pod "cae13f89-c09f-4e59-b3e5-7de6b4562d17" (UID: "cae13f89-c09f-4e59-b3e5-7de6b4562d17"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.683571 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae13f89-c09f-4e59-b3e5-7de6b4562d17-kube-api-access-m85q6" (OuterVolumeSpecName: "kube-api-access-m85q6") pod "cae13f89-c09f-4e59-b3e5-7de6b4562d17" (UID: "cae13f89-c09f-4e59-b3e5-7de6b4562d17"). InnerVolumeSpecName "kube-api-access-m85q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.752048 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-config-data" (OuterVolumeSpecName: "config-data") pod "cae13f89-c09f-4e59-b3e5-7de6b4562d17" (UID: "cae13f89-c09f-4e59-b3e5-7de6b4562d17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.772099 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae13f89-c09f-4e59-b3e5-7de6b4562d17" (UID: "cae13f89-c09f-4e59-b3e5-7de6b4562d17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.776400 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.776661 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cae13f89-c09f-4e59-b3e5-7de6b4562d17-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.776672 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85q6\" (UniqueName: \"kubernetes.io/projected/cae13f89-c09f-4e59-b3e5-7de6b4562d17-kube-api-access-m85q6\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.776684 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.776722 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.812936 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cae13f89-c09f-4e59-b3e5-7de6b4562d17" (UID: "cae13f89-c09f-4e59-b3e5-7de6b4562d17"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.824521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cae13f89-c09f-4e59-b3e5-7de6b4562d17" (UID: "cae13f89-c09f-4e59-b3e5-7de6b4562d17"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.878681 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.878710 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cae13f89-c09f-4e59-b3e5-7de6b4562d17-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.975182 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:47:26 crc kubenswrapper[4730]: I0131 16:47:26.975240 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.037761 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.059981 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" exitCode=1 Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.060462 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.060474 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff"} Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.060534 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.060548 4730 scope.go:117] "RemoveContainer" containerID="4447520ba8817b50d0ba6b0a6b8a105c7d93b20b57b9de2f464b92326c6a1549" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.060616 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:47:27 crc kubenswrapper[4730]: E0131 16:47:27.060881 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" 
pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.069996 4730 generic.go:334] "Generic (PLEG): container finished" podID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerID="d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596" exitCode=0 Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.070041 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-959768976-4n77c" event={"ID":"cae13f89-c09f-4e59-b3e5-7de6b4562d17","Type":"ContainerDied","Data":"d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596"} Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.070069 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-959768976-4n77c" event={"ID":"cae13f89-c09f-4e59-b3e5-7de6b4562d17","Type":"ContainerDied","Data":"d272c96d271d7ba661b0cef74dd45c51771bcb418e4c59385569e5a8a9662d78"} Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.070136 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-959768976-4n77c" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.130369 4730 scope.go:117] "RemoveContainer" containerID="d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.144174 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-959768976-4n77c"] Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.150013 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-959768976-4n77c"] Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.217024 4730 scope.go:117] "RemoveContainer" containerID="636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.344410 4730 scope.go:117] "RemoveContainer" containerID="d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596" Jan 31 16:47:27 crc kubenswrapper[4730]: E0131 16:47:27.344870 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596\": container with ID starting with d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596 not found: ID does not exist" containerID="d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.344909 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596"} err="failed to get container status \"d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596\": rpc error: code = NotFound desc = could not find container \"d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596\": container with ID starting with d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596 not found: ID does not exist" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.344937 4730 scope.go:117] "RemoveContainer" containerID="636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0" Jan 31 16:47:27 crc kubenswrapper[4730]: E0131 16:47:27.350402 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0\": container with ID starting with 
636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0 not found: ID does not exist" containerID="636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0" Jan 31 16:47:27 crc kubenswrapper[4730]: I0131 16:47:27.350427 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0"} err="failed to get container status \"636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0\": rpc error: code = NotFound desc = could not find container \"636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0\": container with ID starting with 636bd1a4f689eb8ea368a8d38ab04f75b9cd83f78eab14f2ef44f8420b69a5e0 not found: ID does not exist" Jan 31 16:47:28 crc kubenswrapper[4730]: I0131 16:47:28.097206 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f328aa35-7979-4ff9-ab15-57e088728259","Type":"ContainerStarted","Data":"7a157b2889c09694321887fbe52da47aa1469684ccdc0d30931d43020f4d6774"} Jan 31 16:47:28 crc kubenswrapper[4730]: I0131 16:47:28.097255 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f328aa35-7979-4ff9-ab15-57e088728259","Type":"ContainerStarted","Data":"9a67efa8b4eee600b5762c7db2b90c74c805d5e02d5546445fa39c6ec07d9686"} Jan 31 16:47:28 crc kubenswrapper[4730]: I0131 16:47:28.109957 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:47:28 crc kubenswrapper[4730]: I0131 16:47:28.110083 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:47:28 crc kubenswrapper[4730]: I0131 16:47:28.110224 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:47:28 crc kubenswrapper[4730]: E0131 16:47:28.111053 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:28 crc kubenswrapper[4730]: I0131 16:47:28.475351 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" path="/var/lib/kubelet/pods/cae13f89-c09f-4e59-b3e5-7de6b4562d17/volumes" Jan 31 16:47:29 crc kubenswrapper[4730]: I0131 16:47:29.138221 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f328aa35-7979-4ff9-ab15-57e088728259","Type":"ContainerStarted","Data":"08aeb03229283fe24b6892a5e727f71cc9512c4f8b2e454af14b58c290d13356"} Jan 31 16:47:29 crc kubenswrapper[4730]: I0131 16:47:29.164419 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.164405217 podStartE2EDuration="3.164405217s" 
podCreationTimestamp="2026-01-31 16:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:29.157528467 +0000 UTC m=+1035.963585373" watchObservedRunningTime="2026-01-31 16:47:29.164405217 +0000 UTC m=+1035.970462133" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.328953 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5867f46d87-f8rf9"] Jan 31 16:47:30 crc kubenswrapper[4730]: E0131 16:47:30.329781 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-log" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.329817 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-log" Jan 31 16:47:30 crc kubenswrapper[4730]: E0131 16:47:30.329829 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-api" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.329837 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-api" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.330060 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-api" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.330087 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae13f89-c09f-4e59-b3e5-7de6b4562d17" containerName="placement-log" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.331259 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.333785 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.333978 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.350864 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5867f46d87-f8rf9"] Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442401 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7wpm\" (UniqueName: \"kubernetes.io/projected/4c3d9aec-6a99-480d-a7f3-0703ac92db04-kube-api-access-g7wpm\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442442 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-config-data\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442487 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-internal-tls-certs\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442524 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-public-tls-certs\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442599 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c3d9aec-6a99-480d-a7f3-0703ac92db04-run-httpd\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442645 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-combined-ca-bundle\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442715 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c3d9aec-6a99-480d-a7f3-0703ac92db04-etc-swift\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.442747 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c3d9aec-6a99-480d-a7f3-0703ac92db04-log-httpd\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.544746 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c3d9aec-6a99-480d-a7f3-0703ac92db04-log-httpd\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.544792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7wpm\" (UniqueName: \"kubernetes.io/projected/4c3d9aec-6a99-480d-a7f3-0703ac92db04-kube-api-access-g7wpm\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.544833 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-config-data\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.544864 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-internal-tls-certs\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.544900 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-public-tls-certs\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.544953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c3d9aec-6a99-480d-a7f3-0703ac92db04-run-httpd\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.544986 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-combined-ca-bundle\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.545038 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c3d9aec-6a99-480d-a7f3-0703ac92db04-etc-swift\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.549311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/4c3d9aec-6a99-480d-a7f3-0703ac92db04-log-httpd\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.549374 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c3d9aec-6a99-480d-a7f3-0703ac92db04-run-httpd\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.558332 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c3d9aec-6a99-480d-a7f3-0703ac92db04-etc-swift\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.562006 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-config-data\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.565829 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-internal-tls-certs\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.566904 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-combined-ca-bundle\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.569282 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7wpm\" (UniqueName: \"kubernetes.io/projected/4c3d9aec-6a99-480d-a7f3-0703ac92db04-kube-api-access-g7wpm\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.576678 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3d9aec-6a99-480d-a7f3-0703ac92db04-public-tls-certs\") pod \"swift-proxy-5867f46d87-f8rf9\" (UID: \"4c3d9aec-6a99-480d-a7f3-0703ac92db04\") " pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:30 crc kubenswrapper[4730]: I0131 16:47:30.652468 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:31 crc kubenswrapper[4730]: I0131 16:47:31.262408 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5867f46d87-f8rf9"] Jan 31 16:47:31 crc kubenswrapper[4730]: I0131 16:47:31.418167 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 16:47:32 crc kubenswrapper[4730]: I0131 16:47:32.371925 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:32 crc kubenswrapper[4730]: I0131 16:47:32.372218 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-central-agent" containerID="cri-o://3fa22c4d744601ff6674179694376a99c4fef6f9d54c771026c5598766e2a9ff" gracePeriod=30 Jan 31 16:47:32 crc kubenswrapper[4730]: I0131 16:47:32.372417 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-notification-agent" containerID="cri-o://296b42a59e94cef411931555bddc305100a561217706090634d8c3fe6ce07a4a" gracePeriod=30 Jan 31 16:47:32 crc kubenswrapper[4730]: I0131 16:47:32.372479 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="proxy-httpd" containerID="cri-o://c07097b7fa261f34fdadae6ff4e82507af2c2a89feb0f9c5faa981e203395058" gracePeriod=30 Jan 31 16:47:32 crc kubenswrapper[4730]: I0131 16:47:32.372498 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="sg-core" containerID="cri-o://a2be787384af66bed63096f131082f64eaec12e23d326f16f0b0036499d0103b" gracePeriod=30 Jan 31 16:47:32 crc kubenswrapper[4730]: I0131 16:47:32.386533 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.173:3000/\": EOF" Jan 31 16:47:33 crc kubenswrapper[4730]: I0131 16:47:33.174959 4730 generic.go:334] "Generic (PLEG): container finished" podID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerID="c07097b7fa261f34fdadae6ff4e82507af2c2a89feb0f9c5faa981e203395058" exitCode=0 Jan 31 16:47:33 crc kubenswrapper[4730]: I0131 16:47:33.175266 4730 generic.go:334] "Generic (PLEG): container finished" podID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerID="a2be787384af66bed63096f131082f64eaec12e23d326f16f0b0036499d0103b" exitCode=2 Jan 31 16:47:33 crc kubenswrapper[4730]: I0131 16:47:33.175038 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerDied","Data":"c07097b7fa261f34fdadae6ff4e82507af2c2a89feb0f9c5faa981e203395058"} Jan 31 16:47:33 crc kubenswrapper[4730]: I0131 16:47:33.175309 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerDied","Data":"a2be787384af66bed63096f131082f64eaec12e23d326f16f0b0036499d0103b"} Jan 31 16:47:33 crc kubenswrapper[4730]: I0131 16:47:33.175325 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerDied","Data":"3fa22c4d744601ff6674179694376a99c4fef6f9d54c771026c5598766e2a9ff"} Jan 31 16:47:33 crc kubenswrapper[4730]: I0131 16:47:33.175277 4730 generic.go:334] "Generic (PLEG): container finished" podID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerID="3fa22c4d744601ff6674179694376a99c4fef6f9d54c771026c5598766e2a9ff" exitCode=0 Jan 31 16:47:34 crc kubenswrapper[4730]: I0131 16:47:34.270981 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:47:34 crc kubenswrapper[4730]: I0131 16:47:34.272240 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-httpd" containerID="cri-o://a8a7a8a0768c4834e6bb57b74dfa6519e4934cfb4ae53e8a56073cc3617fae52" gracePeriod=30 Jan 31 16:47:34 crc kubenswrapper[4730]: I0131 16:47:34.272360 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-log" containerID="cri-o://c5444d6899e0440c48bfec22fe8f2bbc1c926b665fa3b1fb25b180bd7965a983" gracePeriod=30 Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.123355 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c4d975ccf-jbdgk" Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.211880 4730 generic.go:334] "Generic (PLEG): container finished" podID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerID="296b42a59e94cef411931555bddc305100a561217706090634d8c3fe6ce07a4a" exitCode=0 Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.211965 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerDied","Data":"296b42a59e94cef411931555bddc305100a561217706090634d8c3fe6ce07a4a"} Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.222330 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-777d75d768-bwvb5"] Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.222613 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-777d75d768-bwvb5" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-api" containerID="cri-o://658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8" gracePeriod=30 Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.222770 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-777d75d768-bwvb5" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-httpd" containerID="cri-o://e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c" gracePeriod=30 Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.226736 4730 generic.go:334] "Generic (PLEG): container finished" podID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerID="c5444d6899e0440c48bfec22fe8f2bbc1c926b665fa3b1fb25b180bd7965a983" exitCode=143 Jan 31 16:47:35 crc kubenswrapper[4730]: I0131 16:47:35.226775 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85700f98-5f9c-41da-9ef2-f5ff4aa785c6","Type":"ContainerDied","Data":"c5444d6899e0440c48bfec22fe8f2bbc1c926b665fa3b1fb25b180bd7965a983"} Jan 31 16:47:36 crc kubenswrapper[4730]: I0131 16:47:36.180869 4730 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:47:36 crc kubenswrapper[4730]: I0131 16:47:36.181102 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-log" containerID="cri-o://07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296" gracePeriod=30 Jan 31 16:47:36 crc kubenswrapper[4730]: I0131 16:47:36.181237 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-httpd" containerID="cri-o://cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea" gracePeriod=30 Jan 31 16:47:36 crc kubenswrapper[4730]: I0131 16:47:36.238865 4730 generic.go:334] "Generic (PLEG): container finished" podID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerID="e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c" exitCode=0 Jan 31 16:47:36 crc kubenswrapper[4730]: I0131 16:47:36.238904 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777d75d768-bwvb5" event={"ID":"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd","Type":"ContainerDied","Data":"e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c"} Jan 31 16:47:36 crc kubenswrapper[4730]: I0131 16:47:36.672952 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 16:47:36 crc kubenswrapper[4730]: W0131 16:47:36.699013 4730 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-conmon-e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-conmon-e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c.scope: no such file or directory Jan 31 16:47:36 crc kubenswrapper[4730]: W0131 16:47:36.699242 4730 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c.scope: no such file or directory Jan 31 16:47:36 crc kubenswrapper[4730]: W0131 16:47:36.699345 4730 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-conmon-fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-conmon-fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4.scope: no such file or directory Jan 31 16:47:36 crc kubenswrapper[4730]: W0131 16:47:36.699425 4730 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4.scope: no such file or directory Jan 31 16:47:36 crc kubenswrapper[4730]: W0131 16:47:36.699490 4730 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-conmon-78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-conmon-78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff.scope: no such file or directory Jan 31 16:47:36 crc kubenswrapper[4730]: W0131 16:47:36.699556 4730 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3656b8f0_e1d3_4214_9c23_dd437a57f2ad.slice/crio-78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff.scope: no such file or directory Jan 31 16:47:36 crc kubenswrapper[4730]: E0131 16:47:36.901532 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae13f89_c09f_4e59_b3e5_7de6b4562d17.slice/crio-d272c96d271d7ba661b0cef74dd45c51771bcb418e4c59385569e5a8a9662d78\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-3fa22c4d744601ff6674179694376a99c4fef6f9d54c771026c5598766e2a9ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85700f98_5f9c_41da_9ef2_f5ff4aa785c6.slice/crio-conmon-c5444d6899e0440c48bfec22fe8f2bbc1c926b665fa3b1fb25b180bd7965a983.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-conmon-3fa22c4d744601ff6674179694376a99c4fef6f9d54c771026c5598766e2a9ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-a2be787384af66bed63096f131082f64eaec12e23d326f16f0b0036499d0103b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a9c06b_a7ce_4f27_97d9_fafb4b70f1dd.slice/crio-conmon-e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-conmon-296b42a59e94cef411931555bddc305100a561217706090634d8c3fe6ce07a4a.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-conmon-a2be787384af66bed63096f131082f64eaec12e23d326f16f0b0036499d0103b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-conmon-c07097b7fa261f34fdadae6ff4e82507af2c2a89feb0f9c5faa981e203395058.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-c07097b7fa261f34fdadae6ff4e82507af2c2a89feb0f9c5faa981e203395058.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae13f89_c09f_4e59_b3e5_7de6b4562d17.slice/crio-d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae13f89_c09f_4e59_b3e5_7de6b4562d17.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae13f89_c09f_4e59_b3e5_7de6b4562d17.slice/crio-conmon-d79fe29d6b7b2f5dc336ccb6c5559eb6a4d8556905b8aced6a92c14d58e83596.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9279482b_4a11_44db_9f64_2e396fd30ef3.slice/crio-07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85700f98_5f9c_41da_9ef2_f5ff4aa785c6.slice/crio-c5444d6899e0440c48bfec22fe8f2bbc1c926b665fa3b1fb25b180bd7965a983.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0374cd2d_1d23_4f00_893a_278af887d99b.slice/crio-91e328665f0dfb9fb05ca0d20e6343eb8d7f25e993535ec02909c8c02411ff47.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9279482b_4a11_44db_9f64_2e396fd30ef3.slice/crio-conmon-07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a9c06b_a7ce_4f27_97d9_fafb4b70f1dd.slice/crio-e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877c4ba1_eb00_492d_8ef4_afef049a1e25.slice/crio-296b42a59e94cef411931555bddc305100a561217706090634d8c3fe6ce07a4a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0374cd2d_1d23_4f00_893a_278af887d99b.slice/crio-conmon-91e328665f0dfb9fb05ca0d20e6343eb8d7f25e993535ec02909c8c02411ff47.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:47:37 crc kubenswrapper[4730]: I0131 16:47:37.247594 4730 generic.go:334] "Generic (PLEG): container finished" podID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerID="07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296" exitCode=143 Jan 31 16:47:37 crc kubenswrapper[4730]: I0131 16:47:37.247672 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"9279482b-4a11-44db-9f64-2e396fd30ef3","Type":"ContainerDied","Data":"07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296"} Jan 31 16:47:37 crc kubenswrapper[4730]: I0131 16:47:37.250734 4730 generic.go:334] "Generic (PLEG): container finished" podID="0374cd2d-1d23-4f00-893a-278af887d99b" containerID="91e328665f0dfb9fb05ca0d20e6343eb8d7f25e993535ec02909c8c02411ff47" exitCode=137 Jan 31 16:47:37 crc kubenswrapper[4730]: I0131 16:47:37.250835 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7788464654-cr95d" event={"ID":"0374cd2d-1d23-4f00-893a-278af887d99b","Type":"ContainerDied","Data":"91e328665f0dfb9fb05ca0d20e6343eb8d7f25e993535ec02909c8c02411ff47"} Jan 31 16:47:37 crc kubenswrapper[4730]: I0131 16:47:37.252874 4730 generic.go:334] "Generic (PLEG): container finished" podID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerID="5f76ea53478fba62d51bf2177248f8d97c1edacf725d569c9a1e0b691cca8300" exitCode=137 Jan 31 16:47:37 crc kubenswrapper[4730]: I0131 16:47:37.252901 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerDied","Data":"5f76ea53478fba62d51bf2177248f8d97c1edacf725d569c9a1e0b691cca8300"} Jan 31 16:47:37 crc kubenswrapper[4730]: W0131 16:47:37.872970 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c3d9aec_6a99_480d_a7f3_0703ac92db04.slice/crio-7581db63e6aac4462b248d2304e52071aa15c5489afe6149b33c2d2bf5008db1 WatchSource:0}: Error finding container 7581db63e6aac4462b248d2304e52071aa15c5489afe6149b33c2d2bf5008db1: Status 404 returned error can't find the container with id 7581db63e6aac4462b248d2304e52071aa15c5489afe6149b33c2d2bf5008db1 Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.269599 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"7581db63e6aac4462b248d2304e52071aa15c5489afe6149b33c2d2bf5008db1"} Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.271682 4730 generic.go:334] "Generic (PLEG): container finished" podID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerID="a8a7a8a0768c4834e6bb57b74dfa6519e4934cfb4ae53e8a56073cc3617fae52" exitCode=0 Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.271707 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85700f98-5f9c-41da-9ef2-f5ff4aa785c6","Type":"ContainerDied","Data":"a8a7a8a0768c4834e6bb57b74dfa6519e4934cfb4ae53e8a56073cc3617fae52"} Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.538113 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.652763 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.700975 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-sg-core-conf-yaml\") pod \"877c4ba1-eb00-492d-8ef4-afef049a1e25\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701471 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-logs\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701495 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-run-httpd\") pod \"877c4ba1-eb00-492d-8ef4-afef049a1e25\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701513 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-config-data\") pod \"877c4ba1-eb00-492d-8ef4-afef049a1e25\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701535 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-config-data\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701568 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-public-tls-certs\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701598 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-httpd-run\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701831 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-scripts\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701891 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9xqb\" (UniqueName: \"kubernetes.io/projected/877c4ba1-eb00-492d-8ef4-afef049a1e25-kube-api-access-c9xqb\") pod \"877c4ba1-eb00-492d-8ef4-afef049a1e25\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701917 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-combined-ca-bundle\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: 
\"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701963 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-combined-ca-bundle\") pod \"877c4ba1-eb00-492d-8ef4-afef049a1e25\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.701979 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-scripts\") pod \"877c4ba1-eb00-492d-8ef4-afef049a1e25\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.702023 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-log-httpd\") pod \"877c4ba1-eb00-492d-8ef4-afef049a1e25\" (UID: \"877c4ba1-eb00-492d-8ef4-afef049a1e25\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.702048 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx87k\" (UniqueName: \"kubernetes.io/projected/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-kube-api-access-cx87k\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.702084 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\" (UID: \"85700f98-5f9c-41da-9ef2-f5ff4aa785c6\") " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.705286 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.705709 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-logs" (OuterVolumeSpecName: "logs") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.720083 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "877c4ba1-eb00-492d-8ef4-afef049a1e25" (UID: "877c4ba1-eb00-492d-8ef4-afef049a1e25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.720237 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.721321 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "877c4ba1-eb00-492d-8ef4-afef049a1e25" (UID: "877c4ba1-eb00-492d-8ef4-afef049a1e25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.724881 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-scripts" (OuterVolumeSpecName: "scripts") pod "877c4ba1-eb00-492d-8ef4-afef049a1e25" (UID: "877c4ba1-eb00-492d-8ef4-afef049a1e25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.727102 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877c4ba1-eb00-492d-8ef4-afef049a1e25-kube-api-access-c9xqb" (OuterVolumeSpecName: "kube-api-access-c9xqb") pod "877c4ba1-eb00-492d-8ef4-afef049a1e25" (UID: "877c4ba1-eb00-492d-8ef4-afef049a1e25"). InnerVolumeSpecName "kube-api-access-c9xqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.746950 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-scripts" (OuterVolumeSpecName: "scripts") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.751392 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-kube-api-access-cx87k" (OuterVolumeSpecName: "kube-api-access-cx87k") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "kube-api-access-cx87k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.794247 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "877c4ba1-eb00-492d-8ef4-afef049a1e25" (UID: "877c4ba1-eb00-492d-8ef4-afef049a1e25"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.807942 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.807972 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.807982 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.807990 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.807998 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9xqb\" (UniqueName: \"kubernetes.io/projected/877c4ba1-eb00-492d-8ef4-afef049a1e25-kube-api-access-c9xqb\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.808007 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.808015 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/877c4ba1-eb00-492d-8ef4-afef049a1e25-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.808023 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx87k\" (UniqueName: \"kubernetes.io/projected/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-kube-api-access-cx87k\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.808043 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.808051 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.828281 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.838440 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.865206 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-config-data" (OuterVolumeSpecName: "config-data") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.882912 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "85700f98-5f9c-41da-9ef2-f5ff4aa785c6" (UID: "85700f98-5f9c-41da-9ef2-f5ff4aa785c6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.891919 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "877c4ba1-eb00-492d-8ef4-afef049a1e25" (UID: "877c4ba1-eb00-492d-8ef4-afef049a1e25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.909311 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.909340 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.909350 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.909359 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85700f98-5f9c-41da-9ef2-f5ff4aa785c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.909369 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:38 crc kubenswrapper[4730]: I0131 16:47:38.929889 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-config-data" (OuterVolumeSpecName: "config-data") pod "877c4ba1-eb00-492d-8ef4-afef049a1e25" (UID: "877c4ba1-eb00-492d-8ef4-afef049a1e25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.011483 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/877c4ba1-eb00-492d-8ef4-afef049a1e25-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.280705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7788464654-cr95d" event={"ID":"0374cd2d-1d23-4f00-893a-278af887d99b","Type":"ContainerStarted","Data":"d3b78b8ad3b0e77391240b39d80c0bb0e48d090ff2da0e7a2401f4bff87e7e59"} Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.284042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerStarted","Data":"80ae24fe31870e02341eacd37399cd3d3009e58750f2e437dca5b64be6345b4d"} Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.285628 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"3a0da846102d23267c09424d464bd75d31e24499d0a838028b36d95521a34e92"} Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.285743 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.285830 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.285890 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"959b08d8804ba2b55777eaef0dedc315ef0841896810507d75ced17f4a6d110a"} Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.289793 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.290026 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"877c4ba1-eb00-492d-8ef4-afef049a1e25","Type":"ContainerDied","Data":"218e82bc8b78afb2cfa7a51c4e862e02f10a94e7d4cf384eb9cc3f90bfd18195"} Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.290105 4730 scope.go:117] "RemoveContainer" containerID="c07097b7fa261f34fdadae6ff4e82507af2c2a89feb0f9c5faa981e203395058" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.292249 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d","Type":"ContainerStarted","Data":"6ea69152f772bba2d3b6acbd2e736de4e3985dc5ace3d0853057ecb141dc980d"} Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.303188 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85700f98-5f9c-41da-9ef2-f5ff4aa785c6","Type":"ContainerDied","Data":"f192d54e97801cfb1deb44cffaa445ae718419d1b97ab9ac21703041e3e0b798"} Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.303404 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.316584 4730 scope.go:117] "RemoveContainer" containerID="a2be787384af66bed63096f131082f64eaec12e23d326f16f0b0036499d0103b" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.333402 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5867f46d87-f8rf9" podStartSLOduration=9.333386929 podStartE2EDuration="9.333386929s" podCreationTimestamp="2026-01-31 16:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:39.328642214 +0000 UTC m=+1046.134699140" watchObservedRunningTime="2026-01-31 16:47:39.333386929 +0000 UTC m=+1046.139443845" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.379951 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.159:9292/healthcheck\": read tcp 10.217.0.2:58928->10.217.0.159:9292: read: connection reset by peer" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.379969 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9292/healthcheck\": read tcp 10.217.0.2:58944->10.217.0.159:9292: read: connection reset by peer" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.406726 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.737050434 podStartE2EDuration="16.406701916s" podCreationTimestamp="2026-01-31 16:47:23 +0000 UTC" firstStartedPulling="2026-01-31 16:47:24.464207405 +0000 UTC m=+1031.270264321" lastFinishedPulling="2026-01-31 16:47:38.133858887 +0000 UTC m=+1044.939915803" observedRunningTime="2026-01-31 16:47:39.385126113 +0000 UTC m=+1046.191183029" watchObservedRunningTime="2026-01-31 16:47:39.406701916 +0000 UTC m=+1046.212758832" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.599893 4730 scope.go:117] "RemoveContainer" containerID="296b42a59e94cef411931555bddc305100a561217706090634d8c3fe6ce07a4a" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.612854 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.628367 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.660842 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.670262 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.674044 4730 scope.go:117] "RemoveContainer" containerID="3fa22c4d744601ff6674179694376a99c4fef6f9d54c771026c5598766e2a9ff" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.697673 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: E0131 16:47:39.698101 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="sg-core" Jan 
31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.698204 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="sg-core" Jan 31 16:47:39 crc kubenswrapper[4730]: E0131 16:47:39.698260 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-central-agent" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.698312 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-central-agent" Jan 31 16:47:39 crc kubenswrapper[4730]: E0131 16:47:39.698363 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-httpd" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.698411 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-httpd" Jan 31 16:47:39 crc kubenswrapper[4730]: E0131 16:47:39.698466 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="proxy-httpd" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.698513 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="proxy-httpd" Jan 31 16:47:39 crc kubenswrapper[4730]: E0131 16:47:39.698575 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-notification-agent" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.698630 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-notification-agent" Jan 31 16:47:39 crc kubenswrapper[4730]: E0131 16:47:39.698692 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-log" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.698741 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-log" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.698982 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-httpd" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.699069 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="sg-core" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.699127 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-central-agent" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.699196 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="ceilometer-notification-agent" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.699275 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" containerName="glance-log" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.699330 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" containerName="proxy-httpd" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.700904 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.708123 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.708733 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.723745 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.725327 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.727574 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.727758 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.750415 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.763167 4730 scope.go:117] "RemoveContainer" containerID="a8a7a8a0768c4834e6bb57b74dfa6519e4934cfb4ae53e8a56073cc3617fae52" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.766827 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-config-data\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827627 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25fee22-a834-4f4b-82f3-fc6deea85888-logs\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827644 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h687j\" (UniqueName: \"kubernetes.io/projected/f25fee22-a834-4f4b-82f3-fc6deea85888-kube-api-access-h687j\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827660 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-log-httpd\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827676 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tznf\" (UniqueName: \"kubernetes.io/projected/bc778499-6ac0-402f-865d-64323285c0dd-kube-api-access-6tznf\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 
31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827695 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827709 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-config-data\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827732 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f25fee22-a834-4f4b-82f3-fc6deea85888-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827752 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-run-httpd\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827769 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-scripts\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827782 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827817 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827837 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827853 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-scripts\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.827868 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.837905 4730 scope.go:117] "RemoveContainer" containerID="c5444d6899e0440c48bfec22fe8f2bbc1c926b665fa3b1fb25b180bd7965a983" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929078 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-config-data\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929176 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25fee22-a834-4f4b-82f3-fc6deea85888-logs\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929196 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h687j\" (UniqueName: \"kubernetes.io/projected/f25fee22-a834-4f4b-82f3-fc6deea85888-kube-api-access-h687j\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-log-httpd\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tznf\" (UniqueName: \"kubernetes.io/projected/bc778499-6ac0-402f-865d-64323285c0dd-kube-api-access-6tznf\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929259 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929275 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-config-data\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f25fee22-a834-4f4b-82f3-fc6deea85888-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 
16:47:39.929320 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-run-httpd\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929339 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-scripts\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929354 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929373 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929393 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929409 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-scripts\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929424 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.929842 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.932319 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-log-httpd\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.932715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25fee22-a834-4f4b-82f3-fc6deea85888-logs\") pod \"glance-default-external-api-0\" (UID: 
\"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.940043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-scripts\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.940338 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-run-httpd\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.940554 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f25fee22-a834-4f4b-82f3-fc6deea85888-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.945432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.949332 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-config-data\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.949392 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-config-data\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.950701 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.957870 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.958231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h687j\" (UniqueName: \"kubernetes.io/projected/f25fee22-a834-4f4b-82f3-fc6deea85888-kube-api-access-h687j\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.958303 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f25fee22-a834-4f4b-82f3-fc6deea85888-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.960650 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-scripts\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.967577 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tznf\" (UniqueName: \"kubernetes.io/projected/bc778499-6ac0-402f-865d-64323285c0dd-kube-api-access-6tznf\") pod \"ceilometer-0\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " pod="openstack/ceilometer-0" Jan 31 16:47:39 crc kubenswrapper[4730]: I0131 16:47:39.976663 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f25fee22-a834-4f4b-82f3-fc6deea85888\") " pod="openstack/glance-default-external-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.032232 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.073204 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.115247 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.131355 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-combined-ca-bundle\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.131422 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-httpd-run\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.131456 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.131529 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-config-data\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.131551 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-scripts\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: 
I0131 16:47:40.131634 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv5v8\" (UniqueName: \"kubernetes.io/projected/9279482b-4a11-44db-9f64-2e396fd30ef3-kube-api-access-qv5v8\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.131699 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-internal-tls-certs\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.131735 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-logs\") pod \"9279482b-4a11-44db-9f64-2e396fd30ef3\" (UID: \"9279482b-4a11-44db-9f64-2e396fd30ef3\") " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.133060 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.133302 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-logs" (OuterVolumeSpecName: "logs") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.150978 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9279482b-4a11-44db-9f64-2e396fd30ef3-kube-api-access-qv5v8" (OuterVolumeSpecName: "kube-api-access-qv5v8") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "kube-api-access-qv5v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.156089 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-scripts" (OuterVolumeSpecName: "scripts") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.162025 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.236342 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.236374 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.236394 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv5v8\" (UniqueName: \"kubernetes.io/projected/9279482b-4a11-44db-9f64-2e396fd30ef3-kube-api-access-qv5v8\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.236407 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.236415 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9279482b-4a11-44db-9f64-2e396fd30ef3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.244270 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-config-data" (OuterVolumeSpecName: "config-data") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.244890 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.268011 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.286697 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9279482b-4a11-44db-9f64-2e396fd30ef3" (UID: "9279482b-4a11-44db-9f64-2e396fd30ef3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.337665 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.337691 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.337702 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9279482b-4a11-44db-9f64-2e396fd30ef3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.337711 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.345572 4730 generic.go:334] "Generic (PLEG): container finished" podID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerID="cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea" exitCode=0 Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.345632 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9279482b-4a11-44db-9f64-2e396fd30ef3","Type":"ContainerDied","Data":"cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea"} Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.345657 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9279482b-4a11-44db-9f64-2e396fd30ef3","Type":"ContainerDied","Data":"bc663fd351805108d6a580b5e5a0c784a543800f7d1017de0f6e96a3eea050f1"} Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.345673 4730 scope.go:117] "RemoveContainer" containerID="cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.345764 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.384267 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="ee85bc5fc59c3f0b6790a01a8bec9adde51e9224843a4dc959082405198dc125" exitCode=1 Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.384350 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"ee85bc5fc59c3f0b6790a01a8bec9adde51e9224843a4dc959082405198dc125"} Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.385082 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.385142 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.385163 4730 scope.go:117] "RemoveContainer" containerID="ee85bc5fc59c3f0b6790a01a8bec9adde51e9224843a4dc959082405198dc125" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.385232 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.448074 4730 scope.go:117] "RemoveContainer" containerID="07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.532972 4730 scope.go:117] "RemoveContainer" containerID="cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea" Jan 31 16:47:40 crc kubenswrapper[4730]: E0131 16:47:40.535143 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea\": container with ID starting with cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea not found: ID does not exist" containerID="cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.535172 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea"} err="failed to get container status \"cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea\": rpc error: code = NotFound desc = could not find container \"cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea\": container with ID starting with cf9109405b3aad8bfcf763da4f591a3060702b8e8a95539722799027cd60c7ea not found: ID does not exist" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.535192 4730 scope.go:117] "RemoveContainer" containerID="07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296" Jan 31 16:47:40 crc kubenswrapper[4730]: E0131 16:47:40.535620 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296\": container with ID starting with 07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296 not found: ID does not exist" containerID="07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.535635 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296"} err="failed to get container status \"07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296\": rpc error: code = NotFound desc = could not find container \"07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296\": container with ID starting with 07e1be1191d648c56d86b073aac3657992a636c4ebeeda7d77fc6ffe4e4ad296 not found: ID does not exist" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.545362 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85700f98-5f9c-41da-9ef2-f5ff4aa785c6" path="/var/lib/kubelet/pods/85700f98-5f9c-41da-9ef2-f5ff4aa785c6/volumes" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.552051 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877c4ba1-eb00-492d-8ef4-afef049a1e25" path="/var/lib/kubelet/pods/877c4ba1-eb00-492d-8ef4-afef049a1e25/volumes" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.582993 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.596658 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.626762 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:47:40 crc kubenswrapper[4730]: E0131 16:47:40.627195 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-log" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.627212 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-log" Jan 31 16:47:40 crc kubenswrapper[4730]: E0131 16:47:40.627238 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-httpd" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.627245 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-httpd" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.627443 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-httpd" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.627460 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" containerName="glance-log" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.628426 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.631251 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.631531 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.643411 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673303 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673359 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4l9k\" (UniqueName: \"kubernetes.io/projected/78823119-dbb2-462d-8c77-b9df0742a7a9-kube-api-access-s4l9k\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673387 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673409 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78823119-dbb2-462d-8c77-b9df0742a7a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673454 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673475 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673488 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.673532 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78823119-dbb2-462d-8c77-b9df0742a7a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.766657 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:40 crc kubenswrapper[4730]: E0131 16:47:40.771729 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.780092 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.782636 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.785264 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.785332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.785393 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78823119-dbb2-462d-8c77-b9df0742a7a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.785692 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.785717 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4l9k\" (UniqueName: \"kubernetes.io/projected/78823119-dbb2-462d-8c77-b9df0742a7a9-kube-api-access-s4l9k\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.785749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.785777 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78823119-dbb2-462d-8c77-b9df0742a7a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.786234 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78823119-dbb2-462d-8c77-b9df0742a7a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.787298 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.789239 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78823119-dbb2-462d-8c77-b9df0742a7a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.796837 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.797715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.799746 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78823119-dbb2-462d-8c77-b9df0742a7a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.805365 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4l9k\" (UniqueName: \"kubernetes.io/projected/78823119-dbb2-462d-8c77-b9df0742a7a9-kube-api-access-s4l9k\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.818353 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"78823119-dbb2-462d-8c77-b9df0742a7a9\") " pod="openstack/glance-default-internal-api-0" Jan 31 16:47:40 crc kubenswrapper[4730]: I0131 16:47:40.949270 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.067518 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.266424 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.418889 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-config\") pod \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.418940 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbw98\" (UniqueName: \"kubernetes.io/projected/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-kube-api-access-xbw98\") pod \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.418976 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-ovndb-tls-certs\") pod \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.419008 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-httpd-config\") pod \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.419035 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-combined-ca-bundle\") pod \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\" (UID: \"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd\") " Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.441051 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-kube-api-access-xbw98" (OuterVolumeSpecName: "kube-api-access-xbw98") pod "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" (UID: "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd"). InnerVolumeSpecName "kube-api-access-xbw98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.446652 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" (UID: "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.463386 4730 generic.go:334] "Generic (PLEG): container finished" podID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerID="658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8" exitCode=0 Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.463480 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777d75d768-bwvb5" event={"ID":"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd","Type":"ContainerDied","Data":"658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8"} Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.463529 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777d75d768-bwvb5" event={"ID":"e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd","Type":"ContainerDied","Data":"a238526406f8be00165e602961e1a70f1bc9fcc4dce196769b32f5f419d3c375"} Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.463547 4730 scope.go:117] "RemoveContainer" containerID="e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.463678 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-777d75d768-bwvb5" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.506355 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"57f18dcfb7530a415b40c27dcda7694fcabb603d09c2b77a985646d961881789"} Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.507157 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.507219 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.507311 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:47:41 crc kubenswrapper[4730]: E0131 16:47:41.507569 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.512822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerStarted","Data":"a9ed679a10e4b7178edd735e00b887b182f42be83c78b1429bbd97e5a4b60835"} Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.520266 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f25fee22-a834-4f4b-82f3-fc6deea85888","Type":"ContainerStarted","Data":"90ce4ab0810488682852db0ce0e531a4e1f996a5b6c02f8e3f5143882fb53112"} Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.523058 4730 scope.go:117] "RemoveContainer" containerID="658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.523882 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.523957 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbw98\" (UniqueName: \"kubernetes.io/projected/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-kube-api-access-xbw98\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.527909 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="959b08d8804ba2b55777eaef0dedc315ef0841896810507d75ced17f4a6d110a" exitCode=1 Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.528181 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"959b08d8804ba2b55777eaef0dedc315ef0841896810507d75ced17f4a6d110a"} Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.528727 4730 scope.go:117] "RemoveContainer" containerID="959b08d8804ba2b55777eaef0dedc315ef0841896810507d75ced17f4a6d110a" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.549218 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" (UID: "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.607512 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-config" (OuterVolumeSpecName: "config") pod "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" (UID: "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.627645 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.627670 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.629298 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" (UID: "e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.631519 4730 scope.go:117] "RemoveContainer" containerID="e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c" Jan 31 16:47:41 crc kubenswrapper[4730]: E0131 16:47:41.638914 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c\": container with ID starting with e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c not found: ID does not exist" containerID="e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.638957 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c"} err="failed to get container status \"e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c\": rpc error: code = NotFound desc = could not find container \"e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c\": container with ID starting with e3bcccfe0fe7eed1685979817bef5f406aaac6239f2b3340e387feb64826855c not found: ID does not exist" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.638985 4730 scope.go:117] "RemoveContainer" containerID="658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8" Jan 31 16:47:41 crc kubenswrapper[4730]: E0131 16:47:41.647009 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8\": container with ID starting with 658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8 not found: ID does not exist" containerID="658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.647050 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8"} err="failed to get container status \"658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8\": rpc error: code = NotFound desc = could not find container \"658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8\": container with ID starting with 658085b48ccbefdbb7e5d35f8a9b0841000c825df9371e9c652ec33fbfb2e4d8 not found: ID does not exist" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.691661 
4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.731088 4730 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.804440 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-777d75d768-bwvb5"] Jan 31 16:47:41 crc kubenswrapper[4730]: I0131 16:47:41.808590 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-777d75d768-bwvb5"] Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.043247 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.490629 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9279482b-4a11-44db-9f64-2e396fd30ef3" path="/var/lib/kubelet/pods/9279482b-4a11-44db-9f64-2e396fd30ef3/volumes" Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.492201 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" path="/var/lib/kubelet/pods/e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd/volumes" Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.565875 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868"} Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.567101 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.576962 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78823119-dbb2-462d-8c77-b9df0742a7a9","Type":"ContainerStarted","Data":"c4ea95dc184b9264eb84916a106f524d9197ec0d364e9a96f829eb74fbd451fb"} Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.602206 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerStarted","Data":"9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1"} Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.608593 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f25fee22-a834-4f4b-82f3-fc6deea85888","Type":"ContainerStarted","Data":"520e75153e554a8f0844a52bb1f5631ad2bdc5d66d1ab253df4f4548c0cfb3b5"} Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.608954 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.609017 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:47:42 crc kubenswrapper[4730]: I0131 16:47:42.609105 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:47:42 crc kubenswrapper[4730]: E0131 16:47:42.609350 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.617477 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78823119-dbb2-462d-8c77-b9df0742a7a9","Type":"ContainerStarted","Data":"75003f1db15899a5d11926da0ef356b985e8d82baa577896c84379e77a80373d"} Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.617904 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78823119-dbb2-462d-8c77-b9df0742a7a9","Type":"ContainerStarted","Data":"372573c1bdf13f5fa5859d154231d64c134774e65203d0272b943c40ebdba92c"} Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.620768 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerStarted","Data":"2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd"} Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.620812 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerStarted","Data":"08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995"} Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.622605 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f25fee22-a834-4f4b-82f3-fc6deea85888","Type":"ContainerStarted","Data":"32c60af2ec98c1efb325b1495121729acbb8a4d8913c1c7b027dd638f41b75ee"} Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.638746 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.638727077 podStartE2EDuration="3.638727077s" podCreationTimestamp="2026-01-31 16:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:43.634653959 +0000 UTC m=+1050.440710875" watchObservedRunningTime="2026-01-31 16:47:43.638727077 +0000 UTC m=+1050.444783993" Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.657770 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:43 crc kubenswrapper[4730]: I0131 16:47:43.666483 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.666466429 podStartE2EDuration="4.666466429s" podCreationTimestamp="2026-01-31 16:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:43.659938351 +0000 UTC m=+1050.465995277" watchObservedRunningTime="2026-01-31 
16:47:43.666466429 +0000 UTC m=+1050.472523345" Jan 31 16:47:44 crc kubenswrapper[4730]: I0131 16:47:44.632968 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" exitCode=1 Jan 31 16:47:44 crc kubenswrapper[4730]: I0131 16:47:44.633044 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868"} Jan 31 16:47:44 crc kubenswrapper[4730]: I0131 16:47:44.633355 4730 scope.go:117] "RemoveContainer" containerID="959b08d8804ba2b55777eaef0dedc315ef0841896810507d75ced17f4a6d110a" Jan 31 16:47:44 crc kubenswrapper[4730]: I0131 16:47:44.634480 4730 scope.go:117] "RemoveContainer" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" Jan 31 16:47:44 crc kubenswrapper[4730]: E0131 16:47:44.634675 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:47:44 crc kubenswrapper[4730]: I0131 16:47:44.643154 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:45 crc kubenswrapper[4730]: I0131 16:47:45.654879 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:45 crc kubenswrapper[4730]: I0131 16:47:45.660921 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:45 crc kubenswrapper[4730]: I0131 16:47:45.661766 4730 scope.go:117] "RemoveContainer" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" Jan 31 16:47:45 crc kubenswrapper[4730]: E0131 16:47:45.662044 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:47:45 crc kubenswrapper[4730]: I0131 16:47:45.674877 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:45 crc kubenswrapper[4730]: I0131 16:47:45.705931 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.620314 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.620601 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.668617 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerStarted","Data":"82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d"} Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.668890 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="sg-core" containerID="cri-o://2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd" gracePeriod=30 Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.668935 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-notification-agent" containerID="cri-o://08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995" gracePeriod=30 Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.668887 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="proxy-httpd" containerID="cri-o://82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d" gracePeriod=30 Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.669018 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.668783 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-central-agent" containerID="cri-o://9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1" gracePeriod=30 Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.733430 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:47:46 crc kubenswrapper[4730]: I0131 16:47:46.733667 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:47:47 crc kubenswrapper[4730]: E0131 16:47:47.150809 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc778499_6ac0_402f_865d_64323285c0dd.slice/crio-conmon-08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:47:47 crc kubenswrapper[4730]: I0131 16:47:47.678389 4730 generic.go:334] "Generic (PLEG): container finished" podID="bc778499-6ac0-402f-865d-64323285c0dd" containerID="82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d" exitCode=0 Jan 31 16:47:47 crc kubenswrapper[4730]: I0131 16:47:47.678719 4730 generic.go:334] "Generic (PLEG): container finished" podID="bc778499-6ac0-402f-865d-64323285c0dd" containerID="2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd" exitCode=2 Jan 31 16:47:47 crc kubenswrapper[4730]: I0131 16:47:47.678454 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerDied","Data":"82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d"} Jan 31 16:47:47 crc kubenswrapper[4730]: I0131 16:47:47.678820 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerDied","Data":"2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd"} Jan 31 16:47:47 crc kubenswrapper[4730]: I0131 16:47:47.678835 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerDied","Data":"08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995"} Jan 31 16:47:47 crc kubenswrapper[4730]: I0131 16:47:47.678733 4730 generic.go:334] "Generic (PLEG): container finished" podID="bc778499-6ac0-402f-865d-64323285c0dd" containerID="08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995" exitCode=0 Jan 31 16:47:48 crc kubenswrapper[4730]: I0131 16:47:48.658460 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.116465 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.116729 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.158399 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.182710 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.373514147 podStartE2EDuration="11.182692782s" podCreationTimestamp="2026-01-31 16:47:39 +0000 UTC" firstStartedPulling="2026-01-31 16:47:40.779879854 +0000 UTC m=+1047.585936770" lastFinishedPulling="2026-01-31 16:47:45.589058489 +0000 UTC m=+1052.395115405" observedRunningTime="2026-01-31 16:47:46.708273885 +0000 UTC m=+1053.514330801" watchObservedRunningTime="2026-01-31 16:47:50.182692782 +0000 UTC m=+1056.988749698" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.197062 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.658368 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.710228 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.710281 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.950504 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 
16:47:50.950765 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:50 crc kubenswrapper[4730]: I0131 16:47:50.980334 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.025010 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.657789 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.657871 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.658512 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"3a0da846102d23267c09424d464bd75d31e24499d0a838028b36d95521a34e92"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.658530 4730 scope.go:117] "RemoveContainer" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.658550 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://3a0da846102d23267c09424d464bd75d31e24499d0a838028b36d95521a34e92" gracePeriod=30 Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.666107 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.717010 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:51 crc kubenswrapper[4730]: I0131 16:47:51.717281 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:52 crc kubenswrapper[4730]: E0131 16:47:52.122024 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.590251 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.727889 4730 generic.go:334] "Generic (PLEG): container finished" podID="bc778499-6ac0-402f-865d-64323285c0dd" containerID="9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1" exitCode=0 Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.727967 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.727987 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerDied","Data":"9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1"} Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.729007 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc778499-6ac0-402f-865d-64323285c0dd","Type":"ContainerDied","Data":"a9ed679a10e4b7178edd735e00b887b182f42be83c78b1429bbd97e5a4b60835"} Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.729037 4730 scope.go:117] "RemoveContainer" containerID="82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.731540 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="3a0da846102d23267c09424d464bd75d31e24499d0a838028b36d95521a34e92" exitCode=0 Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.731836 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"3a0da846102d23267c09424d464bd75d31e24499d0a838028b36d95521a34e92"} Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.731886 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"e41232a60f932a62d2c5b9d50e9136223d043e8df15499b24ac0f32e2a9687f5"} Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.732079 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.732507 4730 scope.go:117] "RemoveContainer" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" Jan 31 16:47:52 crc kubenswrapper[4730]: E0131 16:47:52.732694 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.748605 4730 scope.go:117] "RemoveContainer" containerID="2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.755970 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-sg-core-conf-yaml\") pod \"bc778499-6ac0-402f-865d-64323285c0dd\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.756118 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6tznf\" (UniqueName: \"kubernetes.io/projected/bc778499-6ac0-402f-865d-64323285c0dd-kube-api-access-6tznf\") pod \"bc778499-6ac0-402f-865d-64323285c0dd\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.756180 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-scripts\") pod \"bc778499-6ac0-402f-865d-64323285c0dd\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.756199 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-config-data\") pod \"bc778499-6ac0-402f-865d-64323285c0dd\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.756253 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-log-httpd\") pod \"bc778499-6ac0-402f-865d-64323285c0dd\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.756294 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-run-httpd\") pod \"bc778499-6ac0-402f-865d-64323285c0dd\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.756320 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-combined-ca-bundle\") pod \"bc778499-6ac0-402f-865d-64323285c0dd\" (UID: \"bc778499-6ac0-402f-865d-64323285c0dd\") " Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.757062 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc778499-6ac0-402f-865d-64323285c0dd" (UID: "bc778499-6ac0-402f-865d-64323285c0dd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.757338 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc778499-6ac0-402f-865d-64323285c0dd" (UID: "bc778499-6ac0-402f-865d-64323285c0dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.766590 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc778499-6ac0-402f-865d-64323285c0dd-kube-api-access-6tznf" (OuterVolumeSpecName: "kube-api-access-6tznf") pod "bc778499-6ac0-402f-865d-64323285c0dd" (UID: "bc778499-6ac0-402f-865d-64323285c0dd"). InnerVolumeSpecName "kube-api-access-6tznf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.769271 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-scripts" (OuterVolumeSpecName: "scripts") pod "bc778499-6ac0-402f-865d-64323285c0dd" (UID: "bc778499-6ac0-402f-865d-64323285c0dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.774913 4730 scope.go:117] "RemoveContainer" containerID="08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.794631 4730 scope.go:117] "RemoveContainer" containerID="9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.809344 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc778499-6ac0-402f-865d-64323285c0dd" (UID: "bc778499-6ac0-402f-865d-64323285c0dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.816924 4730 scope.go:117] "RemoveContainer" containerID="82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d" Jan 31 16:47:52 crc kubenswrapper[4730]: E0131 16:47:52.817404 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d\": container with ID starting with 82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d not found: ID does not exist" containerID="82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.817467 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d"} err="failed to get container status \"82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d\": rpc error: code = NotFound desc = could not find container \"82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d\": container with ID starting with 82b99ef679cbd5c79579342569105c4c53dd42057455e4a0fc59c8d7f8df296d not found: ID does not exist" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.817501 4730 scope.go:117] "RemoveContainer" containerID="2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd" Jan 31 16:47:52 crc kubenswrapper[4730]: E0131 16:47:52.817919 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd\": container with ID starting with 2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd not found: ID does not exist" containerID="2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.817976 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd"} err="failed to get container status \"2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd\": rpc error: code = NotFound desc = could not find container 
\"2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd\": container with ID starting with 2f5e6cb32e8c991b9e26a8738c6dfae87985ef057badb02cdd495384174c3cbd not found: ID does not exist" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.818006 4730 scope.go:117] "RemoveContainer" containerID="08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995" Jan 31 16:47:52 crc kubenswrapper[4730]: E0131 16:47:52.818330 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995\": container with ID starting with 08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995 not found: ID does not exist" containerID="08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.818419 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995"} err="failed to get container status \"08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995\": rpc error: code = NotFound desc = could not find container \"08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995\": container with ID starting with 08c03869ae7ec534af1e331a95f295ebb5c18e3d9bc6bfbf9c482458e8103995 not found: ID does not exist" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.818504 4730 scope.go:117] "RemoveContainer" containerID="9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1" Jan 31 16:47:52 crc kubenswrapper[4730]: E0131 16:47:52.818961 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1\": container with ID starting with 9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1 not found: ID does not exist" containerID="9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.819010 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1"} err="failed to get container status \"9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1\": rpc error: code = NotFound desc = could not find container \"9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1\": container with ID starting with 9e4cfa7b64426a6586f64a94371048feb0957a6755e037b55addb3c985587bd1 not found: ID does not exist" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.858713 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tznf\" (UniqueName: \"kubernetes.io/projected/bc778499-6ac0-402f-865d-64323285c0dd-kube-api-access-6tznf\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.858736 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.858745 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.858756 4730 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc778499-6ac0-402f-865d-64323285c0dd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.858765 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.860859 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc778499-6ac0-402f-865d-64323285c0dd" (UID: "bc778499-6ac0-402f-865d-64323285c0dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.919159 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-config-data" (OuterVolumeSpecName: "config-data") pod "bc778499-6ac0-402f-865d-64323285c0dd" (UID: "bc778499-6ac0-402f-865d-64323285c0dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.959923 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.960177 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc778499-6ac0-402f-865d-64323285c0dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.970727 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.970866 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:47:52 crc kubenswrapper[4730]: I0131 16:47:52.973865 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.093034 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.105909 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.133661 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:53 crc kubenswrapper[4730]: E0131 16:47:53.134056 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-notification-agent" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134073 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-notification-agent" Jan 31 16:47:53 crc kubenswrapper[4730]: E0131 16:47:53.134088 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="sg-core" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134094 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="sg-core" Jan 31 16:47:53 crc kubenswrapper[4730]: E0131 16:47:53.134108 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-central-agent" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134114 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-central-agent" Jan 31 16:47:53 crc kubenswrapper[4730]: E0131 16:47:53.134127 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-httpd" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134133 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-httpd" Jan 31 16:47:53 crc kubenswrapper[4730]: E0131 16:47:53.134190 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="proxy-httpd" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134199 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="proxy-httpd" Jan 31 16:47:53 crc kubenswrapper[4730]: E0131 16:47:53.134215 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-api" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134221 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-api" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134379 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-api" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134394 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-central-agent" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134403 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="sg-core" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134416 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="ceilometer-notification-agent" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134427 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc778499-6ac0-402f-865d-64323285c0dd" containerName="proxy-httpd" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.134440 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a9c06b-a7ce-4f27-97d9-fafb4b70f1dd" containerName="neutron-httpd" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.135880 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.140700 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.142162 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.163358 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.270019 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-log-httpd\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.270065 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.270095 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.270115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-scripts\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.270157 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-config-data\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.270180 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglvw\" (UniqueName: \"kubernetes.io/projected/c47c7769-c372-44e9-a498-0081d8722c44-kube-api-access-mglvw\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.270232 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-run-httpd\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.372183 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-run-httpd\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.372282 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-log-httpd\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.372298 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.372320 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.372357 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-scripts\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.372396 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-config-data\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.372416 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mglvw\" (UniqueName: \"kubernetes.io/projected/c47c7769-c372-44e9-a498-0081d8722c44-kube-api-access-mglvw\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.373135 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-run-httpd\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.373380 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-log-httpd\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.377343 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-scripts\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.377958 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.378388 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.389186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-config-data\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.407263 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mglvw\" (UniqueName: \"kubernetes.io/projected/c47c7769-c372-44e9-a498-0081d8722c44-kube-api-access-mglvw\") pod \"ceilometer-0\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.457472 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.748155 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.748459 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.748898 4730 scope.go:117] "RemoveContainer" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" Jan 31 16:47:53 crc kubenswrapper[4730]: E0131 16:47:53.749103 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:47:53 crc kubenswrapper[4730]: I0131 16:47:53.966604 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.058195 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.105461 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.473211 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc778499-6ac0-402f-865d-64323285c0dd" path="/var/lib/kubelet/pods/bc778499-6ac0-402f-865d-64323285c0dd/volumes" Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.756721 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerStarted","Data":"382b30ca8ae4397ab3331c3876c3c3a7ed888b5339b2bbf54004b615123d9f1f"} Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.864685 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4b7f-account-create-update-vk2h2"] Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.866063 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.871853 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.885380 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-99f6t"] Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.886568 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.908245 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jqxt9"] Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.910189 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.948665 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4b7f-account-create-update-vk2h2"] Jan 31 16:47:54 crc kubenswrapper[4730]: I0131 16:47:54.961814 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jqxt9"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.002096 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v57x\" (UniqueName: \"kubernetes.io/projected/723811c5-3b5b-4e22-806c-682826895b32-kube-api-access-7v57x\") pod \"nova-api-4b7f-account-create-update-vk2h2\" (UID: \"723811c5-3b5b-4e22-806c-682826895b32\") " pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.002150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfad8d5-bd15-41a8-858c-ffd981537c79-operator-scripts\") pod \"nova-cell0-db-create-99f6t\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.002189 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nv4h\" (UniqueName: \"kubernetes.io/projected/2bfad8d5-bd15-41a8-858c-ffd981537c79-kube-api-access-8nv4h\") pod \"nova-cell0-db-create-99f6t\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.002251 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/723811c5-3b5b-4e22-806c-682826895b32-operator-scripts\") pod \"nova-api-4b7f-account-create-update-vk2h2\" (UID: \"723811c5-3b5b-4e22-806c-682826895b32\") " pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.002304 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxrf\" (UniqueName: \"kubernetes.io/projected/5934f8bc-1134-40af-8af2-57ffcbfddda3-kube-api-access-7cxrf\") pod \"nova-api-db-create-jqxt9\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.002621 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5934f8bc-1134-40af-8af2-57ffcbfddda3-operator-scripts\") pod \"nova-api-db-create-jqxt9\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.017208 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-99f6t"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.056453 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7lgq6"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.057887 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.063502 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7lgq6"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.074253 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-72f6-account-create-update-b8qp4"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.084068 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.085928 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-72f6-account-create-update-b8qp4"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.086252 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.106260 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5934f8bc-1134-40af-8af2-57ffcbfddda3-operator-scripts\") pod \"nova-api-db-create-jqxt9\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.106377 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v57x\" (UniqueName: \"kubernetes.io/projected/723811c5-3b5b-4e22-806c-682826895b32-kube-api-access-7v57x\") pod \"nova-api-4b7f-account-create-update-vk2h2\" (UID: \"723811c5-3b5b-4e22-806c-682826895b32\") " pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.106416 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfad8d5-bd15-41a8-858c-ffd981537c79-operator-scripts\") pod \"nova-cell0-db-create-99f6t\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.106449 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nv4h\" (UniqueName: \"kubernetes.io/projected/2bfad8d5-bd15-41a8-858c-ffd981537c79-kube-api-access-8nv4h\") pod \"nova-cell0-db-create-99f6t\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.106474 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/723811c5-3b5b-4e22-806c-682826895b32-operator-scripts\") pod \"nova-api-4b7f-account-create-update-vk2h2\" (UID: 
\"723811c5-3b5b-4e22-806c-682826895b32\") " pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.106494 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cxrf\" (UniqueName: \"kubernetes.io/projected/5934f8bc-1134-40af-8af2-57ffcbfddda3-kube-api-access-7cxrf\") pod \"nova-api-db-create-jqxt9\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.107953 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/723811c5-3b5b-4e22-806c-682826895b32-operator-scripts\") pod \"nova-api-4b7f-account-create-update-vk2h2\" (UID: \"723811c5-3b5b-4e22-806c-682826895b32\") " pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.108438 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5934f8bc-1134-40af-8af2-57ffcbfddda3-operator-scripts\") pod \"nova-api-db-create-jqxt9\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.120265 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfad8d5-bd15-41a8-858c-ffd981537c79-operator-scripts\") pod \"nova-cell0-db-create-99f6t\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.125933 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cxrf\" (UniqueName: \"kubernetes.io/projected/5934f8bc-1134-40af-8af2-57ffcbfddda3-kube-api-access-7cxrf\") pod \"nova-api-db-create-jqxt9\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.126990 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nv4h\" (UniqueName: \"kubernetes.io/projected/2bfad8d5-bd15-41a8-858c-ffd981537c79-kube-api-access-8nv4h\") pod \"nova-cell0-db-create-99f6t\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.145499 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v57x\" (UniqueName: \"kubernetes.io/projected/723811c5-3b5b-4e22-806c-682826895b32-kube-api-access-7v57x\") pod \"nova-api-4b7f-account-create-update-vk2h2\" (UID: \"723811c5-3b5b-4e22-806c-682826895b32\") " pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.185713 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.215405 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-af11-account-create-update-sxvz7"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.216113 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52f18ff-5693-4ec1-ba5d-9df137257c40-operator-scripts\") pod \"nova-cell0-72f6-account-create-update-b8qp4\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.216224 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdnc\" (UniqueName: \"kubernetes.io/projected/f52f18ff-5693-4ec1-ba5d-9df137257c40-kube-api-access-frdnc\") pod \"nova-cell0-72f6-account-create-update-b8qp4\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.216306 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwk9f\" (UniqueName: \"kubernetes.io/projected/05a13f5b-ba5a-4fe2-b395-29562d21fd40-kube-api-access-cwk9f\") pod \"nova-cell1-db-create-7lgq6\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.216382 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a13f5b-ba5a-4fe2-b395-29562d21fd40-operator-scripts\") pod \"nova-cell1-db-create-7lgq6\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.216923 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.220083 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.242868 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-af11-account-create-update-sxvz7"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.256851 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.287863 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.324299 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f85271-c4d1-43fe-95ad-b88443d14a9a-operator-scripts\") pod \"nova-cell1-af11-account-create-update-sxvz7\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.324463 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6w4\" (UniqueName: \"kubernetes.io/projected/e4f85271-c4d1-43fe-95ad-b88443d14a9a-kube-api-access-rg6w4\") pod \"nova-cell1-af11-account-create-update-sxvz7\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.324578 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52f18ff-5693-4ec1-ba5d-9df137257c40-operator-scripts\") pod \"nova-cell0-72f6-account-create-update-b8qp4\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.324640 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frdnc\" (UniqueName: \"kubernetes.io/projected/f52f18ff-5693-4ec1-ba5d-9df137257c40-kube-api-access-frdnc\") pod \"nova-cell0-72f6-account-create-update-b8qp4\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.324674 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwk9f\" (UniqueName: \"kubernetes.io/projected/05a13f5b-ba5a-4fe2-b395-29562d21fd40-kube-api-access-cwk9f\") pod \"nova-cell1-db-create-7lgq6\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.325405 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52f18ff-5693-4ec1-ba5d-9df137257c40-operator-scripts\") pod \"nova-cell0-72f6-account-create-update-b8qp4\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.325460 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a13f5b-ba5a-4fe2-b395-29562d21fd40-operator-scripts\") pod \"nova-cell1-db-create-7lgq6\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.326479 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a13f5b-ba5a-4fe2-b395-29562d21fd40-operator-scripts\") pod \"nova-cell1-db-create-7lgq6\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.352768 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cwk9f\" (UniqueName: \"kubernetes.io/projected/05a13f5b-ba5a-4fe2-b395-29562d21fd40-kube-api-access-cwk9f\") pod \"nova-cell1-db-create-7lgq6\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.358460 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frdnc\" (UniqueName: \"kubernetes.io/projected/f52f18ff-5693-4ec1-ba5d-9df137257c40-kube-api-access-frdnc\") pod \"nova-cell0-72f6-account-create-update-b8qp4\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.401144 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.423119 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.428054 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f85271-c4d1-43fe-95ad-b88443d14a9a-operator-scripts\") pod \"nova-cell1-af11-account-create-update-sxvz7\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.428187 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6w4\" (UniqueName: \"kubernetes.io/projected/e4f85271-c4d1-43fe-95ad-b88443d14a9a-kube-api-access-rg6w4\") pod \"nova-cell1-af11-account-create-update-sxvz7\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.429749 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f85271-c4d1-43fe-95ad-b88443d14a9a-operator-scripts\") pod \"nova-cell1-af11-account-create-update-sxvz7\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.455543 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6w4\" (UniqueName: \"kubernetes.io/projected/e4f85271-c4d1-43fe-95ad-b88443d14a9a-kube-api-access-rg6w4\") pod \"nova-cell1-af11-account-create-update-sxvz7\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.653515 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.813660 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerStarted","Data":"b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20"} Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.887905 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4b7f-account-create-update-vk2h2"] Jan 31 16:47:55 crc kubenswrapper[4730]: I0131 16:47:55.949651 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jqxt9"] Jan 31 16:47:55 crc kubenswrapper[4730]: W0131 16:47:55.971208 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5934f8bc_1134_40af_8af2_57ffcbfddda3.slice/crio-a050bd5484cd4ed27d86925a8c2d78edcad2655f24137be859d576e068be4403 WatchSource:0}: Error finding container a050bd5484cd4ed27d86925a8c2d78edcad2655f24137be859d576e068be4403: Status 404 returned error can't find the container with id a050bd5484cd4ed27d86925a8c2d78edcad2655f24137be859d576e068be4403 Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.057366 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-72f6-account-create-update-b8qp4"] Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.077779 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-99f6t"] Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.183353 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7lgq6"] Jan 31 16:47:56 crc kubenswrapper[4730]: W0131 16:47:56.207085 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05a13f5b_ba5a_4fe2_b395_29562d21fd40.slice/crio-acdb2b4a8dd49d068730b4ac214c90af66dad714a5e972b69e693d2fdccf510f WatchSource:0}: Error finding container acdb2b4a8dd49d068730b4ac214c90af66dad714a5e972b69e693d2fdccf510f: Status 404 returned error can't find the container with id acdb2b4a8dd49d068730b4ac214c90af66dad714a5e972b69e693d2fdccf510f Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.467260 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.467526 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.467622 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:47:56 crc kubenswrapper[4730]: E0131 16:47:56.467964 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.497651 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-af11-account-create-update-sxvz7"] Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.621244 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7788464654-cr95d" podUID="0374cd2d-1d23-4f00-893a-278af887d99b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.738014 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.851184 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" event={"ID":"f52f18ff-5693-4ec1-ba5d-9df137257c40","Type":"ContainerStarted","Data":"5131ef244154a0a2e7c22c81b42de30262955196c77bfe00d7723e7fcde9b2a5"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.851237 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" event={"ID":"f52f18ff-5693-4ec1-ba5d-9df137257c40","Type":"ContainerStarted","Data":"d5f2464015498d5fcf0ee89b35daaa7701c35b98332d4d92c914f2c27288a606"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.862481 4730 generic.go:334] "Generic (PLEG): container finished" podID="723811c5-3b5b-4e22-806c-682826895b32" containerID="c1dbdba61f3503c6ddaa4f2e3c04bddba0ce40a074719da2a57ebb8ff80b9ce9" exitCode=0 Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.862588 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4b7f-account-create-update-vk2h2" event={"ID":"723811c5-3b5b-4e22-806c-682826895b32","Type":"ContainerDied","Data":"c1dbdba61f3503c6ddaa4f2e3c04bddba0ce40a074719da2a57ebb8ff80b9ce9"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.862624 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4b7f-account-create-update-vk2h2" event={"ID":"723811c5-3b5b-4e22-806c-682826895b32","Type":"ContainerStarted","Data":"9ed0086f522c19842a21c19916ac0702eaaac14643846b5b9f8a013b9bf4b07f"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.870247 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jqxt9" event={"ID":"5934f8bc-1134-40af-8af2-57ffcbfddda3","Type":"ContainerStarted","Data":"af49d33e2b53192a139e2aa279b7240d4161610f4bc8fe6866dadbaa822c8ede"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.870286 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jqxt9" event={"ID":"5934f8bc-1134-40af-8af2-57ffcbfddda3","Type":"ContainerStarted","Data":"a050bd5484cd4ed27d86925a8c2d78edcad2655f24137be859d576e068be4403"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.870840 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" podStartSLOduration=1.8708245190000001 
podStartE2EDuration="1.870824519s" podCreationTimestamp="2026-01-31 16:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:56.869000714 +0000 UTC m=+1063.675057630" watchObservedRunningTime="2026-01-31 16:47:56.870824519 +0000 UTC m=+1063.676881425" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.874731 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-99f6t" event={"ID":"2bfad8d5-bd15-41a8-858c-ffd981537c79","Type":"ContainerStarted","Data":"7baa455b583bf8932473b82c023ae9a2b5b3176cf6c0c8036213ff16f646471b"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.874847 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-99f6t" event={"ID":"2bfad8d5-bd15-41a8-858c-ffd981537c79","Type":"ContainerStarted","Data":"adc43e52bc9d98ed19859f223157dacd5dd43561459507efc2b8d3a594f301be"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.881705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerStarted","Data":"c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.884825 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7lgq6" event={"ID":"05a13f5b-ba5a-4fe2-b395-29562d21fd40","Type":"ContainerStarted","Data":"a5db78471750fc731b4c8a042342459fc01a554a2b2f2aa60de2e00220da9925"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.884851 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7lgq6" event={"ID":"05a13f5b-ba5a-4fe2-b395-29562d21fd40","Type":"ContainerStarted","Data":"acdb2b4a8dd49d068730b4ac214c90af66dad714a5e972b69e693d2fdccf510f"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.888743 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" event={"ID":"e4f85271-c4d1-43fe-95ad-b88443d14a9a","Type":"ContainerStarted","Data":"969db365e982ca78a8b274abc63fb16baa4ac0310c2b7a6f82a570b4b8128bab"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.888858 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" event={"ID":"e4f85271-c4d1-43fe-95ad-b88443d14a9a","Type":"ContainerStarted","Data":"3f873b5b6034932548c67f1ef641fbb6df0a56cb58cae5689535e1cf2be40dd0"} Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.906028 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-jqxt9" podStartSLOduration=2.906008491 podStartE2EDuration="2.906008491s" podCreationTimestamp="2026-01-31 16:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:56.905402966 +0000 UTC m=+1063.711459882" watchObservedRunningTime="2026-01-31 16:47:56.906008491 +0000 UTC m=+1063.712065407" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.929420 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" podStartSLOduration=1.929401848 podStartE2EDuration="1.929401848s" podCreationTimestamp="2026-01-31 16:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 16:47:56.928736522 +0000 UTC m=+1063.734793438" watchObservedRunningTime="2026-01-31 16:47:56.929401848 +0000 UTC m=+1063.735458764" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.956936 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-7lgq6" podStartSLOduration=2.9569181540000002 podStartE2EDuration="2.956918154s" podCreationTimestamp="2026-01-31 16:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:56.946123153 +0000 UTC m=+1063.752180069" watchObservedRunningTime="2026-01-31 16:47:56.956918154 +0000 UTC m=+1063.762975070" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.967716 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-99f6t" podStartSLOduration=2.967700146 podStartE2EDuration="2.967700146s" podCreationTimestamp="2026-01-31 16:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:47:56.9629102 +0000 UTC m=+1063.768967116" watchObservedRunningTime="2026-01-31 16:47:56.967700146 +0000 UTC m=+1063.773757062" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.975407 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.975510 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.975600 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.976283 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9edfe6ca891dac90613c7fe072627dce26dbef80751209cf3e40ccba97010f80"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:47:56 crc kubenswrapper[4730]: I0131 16:47:56.976425 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://9edfe6ca891dac90613c7fe072627dce26dbef80751209cf3e40ccba97010f80" gracePeriod=600 Jan 31 16:47:57 crc kubenswrapper[4730]: E0131 16:47:57.486642 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4f85271_c4d1_43fe_95ad_b88443d14a9a.slice/crio-conmon-969db365e982ca78a8b274abc63fb16baa4ac0310c2b7a6f82a570b4b8128bab.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4f85271_c4d1_43fe_95ad_b88443d14a9a.slice/crio-969db365e982ca78a8b274abc63fb16baa4ac0310c2b7a6f82a570b4b8128bab.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.667308 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.897650 4730 generic.go:334] "Generic (PLEG): container finished" podID="e4f85271-c4d1-43fe-95ad-b88443d14a9a" containerID="969db365e982ca78a8b274abc63fb16baa4ac0310c2b7a6f82a570b4b8128bab" exitCode=0 Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.897707 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" event={"ID":"e4f85271-c4d1-43fe-95ad-b88443d14a9a","Type":"ContainerDied","Data":"969db365e982ca78a8b274abc63fb16baa4ac0310c2b7a6f82a570b4b8128bab"} Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.898988 4730 generic.go:334] "Generic (PLEG): container finished" podID="f52f18ff-5693-4ec1-ba5d-9df137257c40" containerID="5131ef244154a0a2e7c22c81b42de30262955196c77bfe00d7723e7fcde9b2a5" exitCode=0 Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.899034 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" event={"ID":"f52f18ff-5693-4ec1-ba5d-9df137257c40","Type":"ContainerDied","Data":"5131ef244154a0a2e7c22c81b42de30262955196c77bfe00d7723e7fcde9b2a5"} Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.900548 4730 generic.go:334] "Generic (PLEG): container finished" podID="5934f8bc-1134-40af-8af2-57ffcbfddda3" containerID="af49d33e2b53192a139e2aa279b7240d4161610f4bc8fe6866dadbaa822c8ede" exitCode=0 Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.900593 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jqxt9" event={"ID":"5934f8bc-1134-40af-8af2-57ffcbfddda3","Type":"ContainerDied","Data":"af49d33e2b53192a139e2aa279b7240d4161610f4bc8fe6866dadbaa822c8ede"} Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.903125 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="9edfe6ca891dac90613c7fe072627dce26dbef80751209cf3e40ccba97010f80" exitCode=0 Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.903178 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"9edfe6ca891dac90613c7fe072627dce26dbef80751209cf3e40ccba97010f80"} Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.903203 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"21bc1c0d1795b476dc0a7f952823b035db816e9829905fa6afc3669ea169eecc"} Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.903219 4730 scope.go:117] "RemoveContainer" containerID="d31bd001ee74e3469a2749b923f42adb83a31cb422ef5d9b45febe42584ea0e1" Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.904599 4730 generic.go:334] "Generic (PLEG): container finished" podID="2bfad8d5-bd15-41a8-858c-ffd981537c79" 
containerID="7baa455b583bf8932473b82c023ae9a2b5b3176cf6c0c8036213ff16f646471b" exitCode=0 Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.904673 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-99f6t" event={"ID":"2bfad8d5-bd15-41a8-858c-ffd981537c79","Type":"ContainerDied","Data":"7baa455b583bf8932473b82c023ae9a2b5b3176cf6c0c8036213ff16f646471b"} Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.906315 4730 generic.go:334] "Generic (PLEG): container finished" podID="05a13f5b-ba5a-4fe2-b395-29562d21fd40" containerID="a5db78471750fc731b4c8a042342459fc01a554a2b2f2aa60de2e00220da9925" exitCode=0 Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.906366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7lgq6" event={"ID":"05a13f5b-ba5a-4fe2-b395-29562d21fd40","Type":"ContainerDied","Data":"a5db78471750fc731b4c8a042342459fc01a554a2b2f2aa60de2e00220da9925"} Jan 31 16:47:57 crc kubenswrapper[4730]: I0131 16:47:57.913344 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerStarted","Data":"c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e"} Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.337000 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.433686 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v57x\" (UniqueName: \"kubernetes.io/projected/723811c5-3b5b-4e22-806c-682826895b32-kube-api-access-7v57x\") pod \"723811c5-3b5b-4e22-806c-682826895b32\" (UID: \"723811c5-3b5b-4e22-806c-682826895b32\") " Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.433754 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/723811c5-3b5b-4e22-806c-682826895b32-operator-scripts\") pod \"723811c5-3b5b-4e22-806c-682826895b32\" (UID: \"723811c5-3b5b-4e22-806c-682826895b32\") " Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.436227 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/723811c5-3b5b-4e22-806c-682826895b32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "723811c5-3b5b-4e22-806c-682826895b32" (UID: "723811c5-3b5b-4e22-806c-682826895b32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.443411 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723811c5-3b5b-4e22-806c-682826895b32-kube-api-access-7v57x" (OuterVolumeSpecName: "kube-api-access-7v57x") pod "723811c5-3b5b-4e22-806c-682826895b32" (UID: "723811c5-3b5b-4e22-806c-682826895b32"). InnerVolumeSpecName "kube-api-access-7v57x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.536296 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v57x\" (UniqueName: \"kubernetes.io/projected/723811c5-3b5b-4e22-806c-682826895b32-kube-api-access-7v57x\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.536332 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/723811c5-3b5b-4e22-806c-682826895b32-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.922153 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4b7f-account-create-update-vk2h2" Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.922161 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4b7f-account-create-update-vk2h2" event={"ID":"723811c5-3b5b-4e22-806c-682826895b32","Type":"ContainerDied","Data":"9ed0086f522c19842a21c19916ac0702eaaac14643846b5b9f8a013b9bf4b07f"} Jan 31 16:47:58 crc kubenswrapper[4730]: I0131 16:47:58.922575 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed0086f522c19842a21c19916ac0702eaaac14643846b5b9f8a013b9bf4b07f" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.398334 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.473350 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frdnc\" (UniqueName: \"kubernetes.io/projected/f52f18ff-5693-4ec1-ba5d-9df137257c40-kube-api-access-frdnc\") pod \"f52f18ff-5693-4ec1-ba5d-9df137257c40\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.473526 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52f18ff-5693-4ec1-ba5d-9df137257c40-operator-scripts\") pod \"f52f18ff-5693-4ec1-ba5d-9df137257c40\" (UID: \"f52f18ff-5693-4ec1-ba5d-9df137257c40\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.474529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52f18ff-5693-4ec1-ba5d-9df137257c40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f52f18ff-5693-4ec1-ba5d-9df137257c40" (UID: "f52f18ff-5693-4ec1-ba5d-9df137257c40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.492703 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f52f18ff-5693-4ec1-ba5d-9df137257c40-kube-api-access-frdnc" (OuterVolumeSpecName: "kube-api-access-frdnc") pod "f52f18ff-5693-4ec1-ba5d-9df137257c40" (UID: "f52f18ff-5693-4ec1-ba5d-9df137257c40"). InnerVolumeSpecName "kube-api-access-frdnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.576903 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frdnc\" (UniqueName: \"kubernetes.io/projected/f52f18ff-5693-4ec1-ba5d-9df137257c40-kube-api-access-frdnc\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.576935 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52f18ff-5693-4ec1-ba5d-9df137257c40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.632404 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.639929 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.646709 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.652838 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.779494 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfad8d5-bd15-41a8-858c-ffd981537c79-operator-scripts\") pod \"2bfad8d5-bd15-41a8-858c-ffd981537c79\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.779735 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a13f5b-ba5a-4fe2-b395-29562d21fd40-operator-scripts\") pod \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.779762 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg6w4\" (UniqueName: \"kubernetes.io/projected/e4f85271-c4d1-43fe-95ad-b88443d14a9a-kube-api-access-rg6w4\") pod \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.779786 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwk9f\" (UniqueName: \"kubernetes.io/projected/05a13f5b-ba5a-4fe2-b395-29562d21fd40-kube-api-access-cwk9f\") pod \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\" (UID: \"05a13f5b-ba5a-4fe2-b395-29562d21fd40\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.779907 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f85271-c4d1-43fe-95ad-b88443d14a9a-operator-scripts\") pod \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\" (UID: \"e4f85271-c4d1-43fe-95ad-b88443d14a9a\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.779950 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5934f8bc-1134-40af-8af2-57ffcbfddda3-operator-scripts\") pod \"5934f8bc-1134-40af-8af2-57ffcbfddda3\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " Jan 31 
16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.780014 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cxrf\" (UniqueName: \"kubernetes.io/projected/5934f8bc-1134-40af-8af2-57ffcbfddda3-kube-api-access-7cxrf\") pod \"5934f8bc-1134-40af-8af2-57ffcbfddda3\" (UID: \"5934f8bc-1134-40af-8af2-57ffcbfddda3\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.780057 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nv4h\" (UniqueName: \"kubernetes.io/projected/2bfad8d5-bd15-41a8-858c-ffd981537c79-kube-api-access-8nv4h\") pod \"2bfad8d5-bd15-41a8-858c-ffd981537c79\" (UID: \"2bfad8d5-bd15-41a8-858c-ffd981537c79\") " Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.780601 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bfad8d5-bd15-41a8-858c-ffd981537c79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bfad8d5-bd15-41a8-858c-ffd981537c79" (UID: "2bfad8d5-bd15-41a8-858c-ffd981537c79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.782195 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a13f5b-ba5a-4fe2-b395-29562d21fd40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05a13f5b-ba5a-4fe2-b395-29562d21fd40" (UID: "05a13f5b-ba5a-4fe2-b395-29562d21fd40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.783057 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5934f8bc-1134-40af-8af2-57ffcbfddda3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5934f8bc-1134-40af-8af2-57ffcbfddda3" (UID: "5934f8bc-1134-40af-8af2-57ffcbfddda3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.783605 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4f85271-c4d1-43fe-95ad-b88443d14a9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4f85271-c4d1-43fe-95ad-b88443d14a9a" (UID: "e4f85271-c4d1-43fe-95ad-b88443d14a9a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.784127 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5934f8bc-1134-40af-8af2-57ffcbfddda3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.784140 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfad8d5-bd15-41a8-858c-ffd981537c79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.784148 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a13f5b-ba5a-4fe2-b395-29562d21fd40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.784157 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f85271-c4d1-43fe-95ad-b88443d14a9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.792395 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bfad8d5-bd15-41a8-858c-ffd981537c79-kube-api-access-8nv4h" (OuterVolumeSpecName: "kube-api-access-8nv4h") pod "2bfad8d5-bd15-41a8-858c-ffd981537c79" (UID: "2bfad8d5-bd15-41a8-858c-ffd981537c79"). InnerVolumeSpecName "kube-api-access-8nv4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.792493 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f85271-c4d1-43fe-95ad-b88443d14a9a-kube-api-access-rg6w4" (OuterVolumeSpecName: "kube-api-access-rg6w4") pod "e4f85271-c4d1-43fe-95ad-b88443d14a9a" (UID: "e4f85271-c4d1-43fe-95ad-b88443d14a9a"). InnerVolumeSpecName "kube-api-access-rg6w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.792631 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a13f5b-ba5a-4fe2-b395-29562d21fd40-kube-api-access-cwk9f" (OuterVolumeSpecName: "kube-api-access-cwk9f") pod "05a13f5b-ba5a-4fe2-b395-29562d21fd40" (UID: "05a13f5b-ba5a-4fe2-b395-29562d21fd40"). InnerVolumeSpecName "kube-api-access-cwk9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.792679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5934f8bc-1134-40af-8af2-57ffcbfddda3-kube-api-access-7cxrf" (OuterVolumeSpecName: "kube-api-access-7cxrf") pod "5934f8bc-1134-40af-8af2-57ffcbfddda3" (UID: "5934f8bc-1134-40af-8af2-57ffcbfddda3"). InnerVolumeSpecName "kube-api-access-7cxrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.886293 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cxrf\" (UniqueName: \"kubernetes.io/projected/5934f8bc-1134-40af-8af2-57ffcbfddda3-kube-api-access-7cxrf\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.886326 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nv4h\" (UniqueName: \"kubernetes.io/projected/2bfad8d5-bd15-41a8-858c-ffd981537c79-kube-api-access-8nv4h\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.886340 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg6w4\" (UniqueName: \"kubernetes.io/projected/e4f85271-c4d1-43fe-95ad-b88443d14a9a-kube-api-access-rg6w4\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.886348 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwk9f\" (UniqueName: \"kubernetes.io/projected/05a13f5b-ba5a-4fe2-b395-29562d21fd40-kube-api-access-cwk9f\") on node \"crc\" DevicePath \"\"" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.933672 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" event={"ID":"e4f85271-c4d1-43fe-95ad-b88443d14a9a","Type":"ContainerDied","Data":"3f873b5b6034932548c67f1ef641fbb6df0a56cb58cae5689535e1cf2be40dd0"} Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.933707 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f873b5b6034932548c67f1ef641fbb6df0a56cb58cae5689535e1cf2be40dd0" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.933760 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-af11-account-create-update-sxvz7" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.936075 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" event={"ID":"f52f18ff-5693-4ec1-ba5d-9df137257c40","Type":"ContainerDied","Data":"d5f2464015498d5fcf0ee89b35daaa7701c35b98332d4d92c914f2c27288a606"} Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.936096 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f2464015498d5fcf0ee89b35daaa7701c35b98332d4d92c914f2c27288a606" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.936134 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-72f6-account-create-update-b8qp4" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.941658 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jqxt9" event={"ID":"5934f8bc-1134-40af-8af2-57ffcbfddda3","Type":"ContainerDied","Data":"a050bd5484cd4ed27d86925a8c2d78edcad2655f24137be859d576e068be4403"} Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.941693 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a050bd5484cd4ed27d86925a8c2d78edcad2655f24137be859d576e068be4403" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.941708 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jqxt9" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.946501 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-99f6t" event={"ID":"2bfad8d5-bd15-41a8-858c-ffd981537c79","Type":"ContainerDied","Data":"adc43e52bc9d98ed19859f223157dacd5dd43561459507efc2b8d3a594f301be"} Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.946526 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adc43e52bc9d98ed19859f223157dacd5dd43561459507efc2b8d3a594f301be" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.946594 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-99f6t" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.950464 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerStarted","Data":"635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9"} Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.950576 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.952659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7lgq6" event={"ID":"05a13f5b-ba5a-4fe2-b395-29562d21fd40","Type":"ContainerDied","Data":"acdb2b4a8dd49d068730b4ac214c90af66dad714a5e972b69e693d2fdccf510f"} Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.952681 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acdb2b4a8dd49d068730b4ac214c90af66dad714a5e972b69e693d2fdccf510f" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.952729 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7lgq6" Jan 31 16:47:59 crc kubenswrapper[4730]: I0131 16:47:59.976671 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.213956109 podStartE2EDuration="6.976656125s" podCreationTimestamp="2026-01-31 16:47:53 +0000 UTC" firstStartedPulling="2026-01-31 16:47:53.973726359 +0000 UTC m=+1060.779783275" lastFinishedPulling="2026-01-31 16:47:59.736426375 +0000 UTC m=+1066.542483291" observedRunningTime="2026-01-31 16:47:59.972157496 +0000 UTC m=+1066.778214412" watchObservedRunningTime="2026-01-31 16:47:59.976656125 +0000 UTC m=+1066.782713041" Jan 31 16:48:00 crc kubenswrapper[4730]: I0131 16:48:00.049355 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:00 crc kubenswrapper[4730]: I0131 16:48:00.660380 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:00 crc kubenswrapper[4730]: I0131 16:48:00.669992 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:01 crc kubenswrapper[4730]: I0131 16:48:01.967318 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-central-agent" containerID="cri-o://b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20" gracePeriod=30 Jan 31 16:48:01 crc kubenswrapper[4730]: I0131 16:48:01.967456 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="proxy-httpd" containerID="cri-o://635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9" gracePeriod=30 Jan 31 16:48:01 crc kubenswrapper[4730]: I0131 16:48:01.967500 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-notification-agent" containerID="cri-o://c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d" gracePeriod=30 Jan 31 16:48:01 crc kubenswrapper[4730]: I0131 16:48:01.967418 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="sg-core" containerID="cri-o://c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e" gracePeriod=30 Jan 31 16:48:02 crc kubenswrapper[4730]: I0131 16:48:02.986529 4730 generic.go:334] "Generic (PLEG): container finished" podID="c47c7769-c372-44e9-a498-0081d8722c44" containerID="635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9" exitCode=0 Jan 31 16:48:02 crc kubenswrapper[4730]: I0131 16:48:02.987072 4730 generic.go:334] "Generic (PLEG): container finished" podID="c47c7769-c372-44e9-a498-0081d8722c44" containerID="c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e" exitCode=2 Jan 31 16:48:02 crc kubenswrapper[4730]: I0131 16:48:02.987083 4730 generic.go:334] "Generic (PLEG): container finished" podID="c47c7769-c372-44e9-a498-0081d8722c44" 
containerID="c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d" exitCode=0 Jan 31 16:48:02 crc kubenswrapper[4730]: I0131 16:48:02.986574 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerDied","Data":"635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9"} Jan 31 16:48:02 crc kubenswrapper[4730]: I0131 16:48:02.987120 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerDied","Data":"c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e"} Jan 31 16:48:02 crc kubenswrapper[4730]: I0131 16:48:02.987136 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerDied","Data":"c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d"} Jan 31 16:48:03 crc kubenswrapper[4730]: I0131 16:48:03.658539 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:03 crc kubenswrapper[4730]: I0131 16:48:03.658624 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:03 crc kubenswrapper[4730]: I0131 16:48:03.659572 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"e41232a60f932a62d2c5b9d50e9136223d043e8df15499b24ac0f32e2a9687f5"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:48:03 crc kubenswrapper[4730]: I0131 16:48:03.659599 4730 scope.go:117] "RemoveContainer" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" Jan 31 16:48:03 crc kubenswrapper[4730]: I0131 16:48:03.659629 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://e41232a60f932a62d2c5b9d50e9136223d043e8df15499b24ac0f32e2a9687f5" gracePeriod=30 Jan 31 16:48:03 crc kubenswrapper[4730]: I0131 16:48:03.662352 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.050256 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="e41232a60f932a62d2c5b9d50e9136223d043e8df15499b24ac0f32e2a9687f5" exitCode=0 Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.050771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"e41232a60f932a62d2c5b9d50e9136223d043e8df15499b24ac0f32e2a9687f5"} Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.050885 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" 
event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"6dd00137c2b55ba8911a6cf41645bd5bc9fe9443ee82beb8e8fc3780dbabffec"} Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.050907 4730 scope.go:117] "RemoveContainer" containerID="3a0da846102d23267c09424d464bd75d31e24499d0a838028b36d95521a34e92" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.572903 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.671645 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-run-httpd\") pod \"c47c7769-c372-44e9-a498-0081d8722c44\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.671761 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-log-httpd\") pod \"c47c7769-c372-44e9-a498-0081d8722c44\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.671809 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-sg-core-conf-yaml\") pod \"c47c7769-c372-44e9-a498-0081d8722c44\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.671892 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-scripts\") pod \"c47c7769-c372-44e9-a498-0081d8722c44\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.671914 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-config-data\") pod \"c47c7769-c372-44e9-a498-0081d8722c44\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.671959 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-combined-ca-bundle\") pod \"c47c7769-c372-44e9-a498-0081d8722c44\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.671995 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mglvw\" (UniqueName: \"kubernetes.io/projected/c47c7769-c372-44e9-a498-0081d8722c44-kube-api-access-mglvw\") pod \"c47c7769-c372-44e9-a498-0081d8722c44\" (UID: \"c47c7769-c372-44e9-a498-0081d8722c44\") " Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.672195 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c47c7769-c372-44e9-a498-0081d8722c44" (UID: "c47c7769-c372-44e9-a498-0081d8722c44"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.672389 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.672748 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c47c7769-c372-44e9-a498-0081d8722c44" (UID: "c47c7769-c372-44e9-a498-0081d8722c44"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.688996 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47c7769-c372-44e9-a498-0081d8722c44-kube-api-access-mglvw" (OuterVolumeSpecName: "kube-api-access-mglvw") pod "c47c7769-c372-44e9-a498-0081d8722c44" (UID: "c47c7769-c372-44e9-a498-0081d8722c44"). InnerVolumeSpecName "kube-api-access-mglvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.689091 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-scripts" (OuterVolumeSpecName: "scripts") pod "c47c7769-c372-44e9-a498-0081d8722c44" (UID: "c47c7769-c372-44e9-a498-0081d8722c44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.729938 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c47c7769-c372-44e9-a498-0081d8722c44" (UID: "c47c7769-c372-44e9-a498-0081d8722c44"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.761825 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c47c7769-c372-44e9-a498-0081d8722c44" (UID: "c47c7769-c372-44e9-a498-0081d8722c44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.779688 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c47c7769-c372-44e9-a498-0081d8722c44-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.779733 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.779743 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.779751 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.779760 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mglvw\" (UniqueName: \"kubernetes.io/projected/c47c7769-c372-44e9-a498-0081d8722c44-kube-api-access-mglvw\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.781202 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-config-data" (OuterVolumeSpecName: "config-data") pod "c47c7769-c372-44e9-a498-0081d8722c44" (UID: "c47c7769-c372-44e9-a498-0081d8722c44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:04 crc kubenswrapper[4730]: I0131 16:48:04.881577 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c47c7769-c372-44e9-a498-0081d8722c44-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.061443 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2"} Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.061582 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.061767 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.063507 4730 generic.go:334] "Generic (PLEG): container finished" podID="c47c7769-c372-44e9-a498-0081d8722c44" containerID="b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20" exitCode=0 Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.063553 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerDied","Data":"b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20"} Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.063578 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.063602 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c47c7769-c372-44e9-a498-0081d8722c44","Type":"ContainerDied","Data":"382b30ca8ae4397ab3331c3876c3c3a7ed888b5339b2bbf54004b615123d9f1f"} Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.063621 4730 scope.go:117] "RemoveContainer" containerID="635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.111294 4730 scope.go:117] "RemoveContainer" containerID="c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.128447 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.134489 4730 scope.go:117] "RemoveContainer" containerID="c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.139285 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.157203 4730 scope.go:117] "RemoveContainer" containerID="b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.167638 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168049 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="sg-core" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168070 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="sg-core" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168096 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bfad8d5-bd15-41a8-858c-ffd981537c79" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168102 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bfad8d5-bd15-41a8-858c-ffd981537c79" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168110 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f85271-c4d1-43fe-95ad-b88443d14a9a" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168117 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f85271-c4d1-43fe-95ad-b88443d14a9a" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168135 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-notification-agent" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168141 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-notification-agent" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168151 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a13f5b-ba5a-4fe2-b395-29562d21fd40" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168156 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a13f5b-ba5a-4fe2-b395-29562d21fd40" 
containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168170 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="proxy-httpd" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168176 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="proxy-httpd" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168194 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5934f8bc-1134-40af-8af2-57ffcbfddda3" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168200 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5934f8bc-1134-40af-8af2-57ffcbfddda3" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168209 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-central-agent" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168214 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-central-agent" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168224 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723811c5-3b5b-4e22-806c-682826895b32" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168229 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="723811c5-3b5b-4e22-806c-682826895b32" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.168239 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52f18ff-5693-4ec1-ba5d-9df137257c40" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168244 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52f18ff-5693-4ec1-ba5d-9df137257c40" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168418 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-notification-agent" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168430 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bfad8d5-bd15-41a8-858c-ffd981537c79" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168438 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a13f5b-ba5a-4fe2-b395-29562d21fd40" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168453 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="ceilometer-central-agent" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168465 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f85271-c4d1-43fe-95ad-b88443d14a9a" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168476 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f52f18ff-5693-4ec1-ba5d-9df137257c40" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168487 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5934f8bc-1134-40af-8af2-57ffcbfddda3" containerName="mariadb-database-create" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168495 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="proxy-httpd" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168502 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="723811c5-3b5b-4e22-806c-682826895b32" containerName="mariadb-account-create-update" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.168514 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47c7769-c372-44e9-a498-0081d8722c44" containerName="sg-core" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.170188 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.174347 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.175099 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.185728 4730 scope.go:117] "RemoveContainer" containerID="635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.186378 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9\": container with ID starting with 635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9 not found: ID does not exist" containerID="635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.186413 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9"} err="failed to get container status \"635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9\": rpc error: code = NotFound desc = could not find container \"635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9\": container with ID starting with 635822118d423276b610be5201aa1d029d83ac578cfb91cae598fe309b6138a9 not found: ID does not exist" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.186440 4730 scope.go:117] "RemoveContainer" containerID="c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.186679 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e\": container with ID starting with c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e not found: ID does not exist" containerID="c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.186693 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e"} err="failed to get container status \"c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e\": rpc error: code = NotFound desc = could not find container \"c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e\": container 
with ID starting with c4eaadd5feb27041b296604f845f8e37646d5b50a5be1403182ecc2abbc0c48e not found: ID does not exist" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.186705 4730 scope.go:117] "RemoveContainer" containerID="c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d" Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.186967 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d\": container with ID starting with c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d not found: ID does not exist" containerID="c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.186983 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d"} err="failed to get container status \"c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d\": rpc error: code = NotFound desc = could not find container \"c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d\": container with ID starting with c010f7f6be66256a7949e2f1609b29a8035616a556839c086035a5d7043af72d not found: ID does not exist" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.186997 4730 scope.go:117] "RemoveContainer" containerID="b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.187716 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:05 crc kubenswrapper[4730]: E0131 16:48:05.188195 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20\": container with ID starting with b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20 not found: ID does not exist" containerID="b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.188231 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20"} err="failed to get container status \"b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20\": rpc error: code = NotFound desc = could not find container \"b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20\": container with ID starting with b66b6c134d271e018e20d9c8105712cbbd8c236175728084afae442cafed5b20 not found: ID does not exist" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.292311 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8tf\" (UniqueName: \"kubernetes.io/projected/72340753-9253-4020-a57d-a7d3ae42a591-kube-api-access-gh8tf\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.292368 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-run-httpd\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 
16:48:05.292422 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.292442 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-config-data\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.292461 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-scripts\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.292554 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-log-httpd\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.292597 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.333790 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvj5l"] Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.334793 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.337158 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-flwwt" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.337380 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.337852 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.364903 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvj5l"] Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394477 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394681 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-log-httpd\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394721 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-config-data\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394768 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js484\" (UniqueName: \"kubernetes.io/projected/05019b69-099e-4b89-b072-ea6b1f2019e3-kube-api-access-js484\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394868 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh8tf\" (UniqueName: \"kubernetes.io/projected/72340753-9253-4020-a57d-a7d3ae42a591-kube-api-access-gh8tf\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394912 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-run-httpd\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.394956 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.395105 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-scripts\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.395141 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-config-data\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.395162 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-scripts\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.396605 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-log-httpd\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.397083 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-run-httpd\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.401871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-scripts\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.404788 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.405704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-config-data\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.410450 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.447205 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gh8tf\" (UniqueName: \"kubernetes.io/projected/72340753-9253-4020-a57d-a7d3ae42a591-kube-api-access-gh8tf\") pod \"ceilometer-0\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.485763 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.496932 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-config-data\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.497189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js484\" (UniqueName: \"kubernetes.io/projected/05019b69-099e-4b89-b072-ea6b1f2019e3-kube-api-access-js484\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.497283 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-scripts\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.497322 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.500952 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.501257 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-scripts\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.504289 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-config-data\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.517740 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js484\" (UniqueName: \"kubernetes.io/projected/05019b69-099e-4b89-b072-ea6b1f2019e3-kube-api-access-js484\") pod \"nova-cell0-conductor-db-sync-dvj5l\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " 
pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.649149 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:05 crc kubenswrapper[4730]: I0131 16:48:05.956200 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:05 crc kubenswrapper[4730]: W0131 16:48:05.964144 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72340753_9253_4020_a57d_a7d3ae42a591.slice/crio-c212b0deff9f5bec56b5c82d6a43f027869f71badb11a543c1bf2d11919c45fc WatchSource:0}: Error finding container c212b0deff9f5bec56b5c82d6a43f027869f71badb11a543c1bf2d11919c45fc: Status 404 returned error can't find the container with id c212b0deff9f5bec56b5c82d6a43f027869f71badb11a543c1bf2d11919c45fc Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.074633 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" exitCode=1 Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.074684 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2"} Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.074713 4730 scope.go:117] "RemoveContainer" containerID="14c8cd1386a4bfd252f26bfcd129b0d347728b89bfbf7f1420a214ce4f84f868" Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.075319 4730 scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:06 crc kubenswrapper[4730]: E0131 16:48:06.075509 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.077429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerStarted","Data":"c212b0deff9f5bec56b5c82d6a43f027869f71badb11a543c1bf2d11919c45fc"} Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.175536 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvj5l"] Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.502400 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47c7769-c372-44e9-a498-0081d8722c44" path="/var/lib/kubelet/pods/c47c7769-c372-44e9-a498-0081d8722c44/volumes" Jan 31 16:48:06 crc kubenswrapper[4730]: I0131 16:48:06.654044 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:07 crc kubenswrapper[4730]: I0131 16:48:07.102735 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerStarted","Data":"59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106"} Jan 31 16:48:07 crc kubenswrapper[4730]: I0131 16:48:07.108753 4730 
scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:07 crc kubenswrapper[4730]: E0131 16:48:07.108959 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:07 crc kubenswrapper[4730]: I0131 16:48:07.110969 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" event={"ID":"05019b69-099e-4b89-b072-ea6b1f2019e3","Type":"ContainerStarted","Data":"5cb7f64590e5ac2a8f596b39040e4b0e9bc45547287f37742fa4a92fc23b7d59"} Jan 31 16:48:08 crc kubenswrapper[4730]: I0131 16:48:08.121906 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerStarted","Data":"7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1"} Jan 31 16:48:08 crc kubenswrapper[4730]: I0131 16:48:08.122613 4730 scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:08 crc kubenswrapper[4730]: E0131 16:48:08.122908 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:08 crc kubenswrapper[4730]: I0131 16:48:08.134093 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:09 crc kubenswrapper[4730]: I0131 16:48:09.152566 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerStarted","Data":"3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b"} Jan 31 16:48:09 crc kubenswrapper[4730]: I0131 16:48:09.686538 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:10 crc kubenswrapper[4730]: I0131 16:48:10.612017 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:48:10 crc kubenswrapper[4730]: I0131 16:48:10.637640 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:48:10 crc kubenswrapper[4730]: I0131 16:48:10.659822 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:11 crc kubenswrapper[4730]: I0131 16:48:11.206251 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerStarted","Data":"f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f"} Jan 31 16:48:11 crc kubenswrapper[4730]: I0131 16:48:11.206454 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:48:11 crc kubenswrapper[4730]: I0131 16:48:11.464555 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:48:11 crc kubenswrapper[4730]: I0131 16:48:11.464637 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:48:11 crc kubenswrapper[4730]: I0131 16:48:11.464719 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:48:11 crc kubenswrapper[4730]: E0131 16:48:11.465187 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:11 crc kubenswrapper[4730]: I0131 16:48:11.870950 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.497073177 podStartE2EDuration="6.870928465s" podCreationTimestamp="2026-01-31 16:48:05 +0000 UTC" firstStartedPulling="2026-01-31 16:48:05.966130445 +0000 UTC m=+1072.772187361" lastFinishedPulling="2026-01-31 16:48:10.339985733 +0000 UTC m=+1077.146042649" observedRunningTime="2026-01-31 16:48:11.23096921 +0000 UTC m=+1078.037026126" watchObservedRunningTime="2026-01-31 16:48:11.870928465 +0000 UTC m=+1078.676985381" Jan 31 16:48:11 crc kubenswrapper[4730]: I0131 16:48:11.883705 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:12 crc kubenswrapper[4730]: I0131 16:48:12.675159 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.236955 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7788464654-cr95d" Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.243442 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-central-agent" containerID="cri-o://59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106" gracePeriod=30 Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.243458 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="sg-core" 
containerID="cri-o://3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b" gracePeriod=30 Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.243458 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="proxy-httpd" containerID="cri-o://f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f" gracePeriod=30 Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.243486 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-notification-agent" containerID="cri-o://7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1" gracePeriod=30 Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.305579 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b5bd455fb-h66br"] Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.306015 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon-log" containerID="cri-o://31c3f1d338e9abdfe52a8ea48e754f02a316f206eec6752e7c454b2a52955b20" gracePeriod=30 Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.308687 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" containerID="cri-o://80ae24fe31870e02341eacd37399cd3d3009e58750f2e437dca5b64be6345b4d" gracePeriod=30 Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.333622 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 31 16:48:13 crc kubenswrapper[4730]: I0131 16:48:13.349674 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 31 16:48:14 crc kubenswrapper[4730]: I0131 16:48:14.255449 4730 generic.go:334] "Generic (PLEG): container finished" podID="72340753-9253-4020-a57d-a7d3ae42a591" containerID="f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f" exitCode=0 Jan 31 16:48:14 crc kubenswrapper[4730]: I0131 16:48:14.255483 4730 generic.go:334] "Generic (PLEG): container finished" podID="72340753-9253-4020-a57d-a7d3ae42a591" containerID="3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b" exitCode=2 Jan 31 16:48:14 crc kubenswrapper[4730]: I0131 16:48:14.255490 4730 generic.go:334] "Generic (PLEG): container finished" podID="72340753-9253-4020-a57d-a7d3ae42a591" containerID="7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1" exitCode=0 Jan 31 16:48:14 crc kubenswrapper[4730]: I0131 16:48:14.255509 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerDied","Data":"f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f"} Jan 31 16:48:14 crc kubenswrapper[4730]: I0131 16:48:14.255532 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerDied","Data":"3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b"} Jan 31 16:48:14 crc kubenswrapper[4730]: I0131 16:48:14.255542 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerDied","Data":"7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1"} Jan 31 16:48:15 crc kubenswrapper[4730]: I0131 16:48:15.659171 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:15 crc kubenswrapper[4730]: I0131 16:48:15.660351 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:15 crc kubenswrapper[4730]: I0131 16:48:15.660387 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:15 crc kubenswrapper[4730]: I0131 16:48:15.661125 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"6dd00137c2b55ba8911a6cf41645bd5bc9fe9443ee82beb8e8fc3780dbabffec"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:48:15 crc kubenswrapper[4730]: I0131 16:48:15.661144 4730 scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:15 crc kubenswrapper[4730]: I0131 16:48:15.661165 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://6dd00137c2b55ba8911a6cf41645bd5bc9fe9443ee82beb8e8fc3780dbabffec" gracePeriod=30 Jan 31 16:48:15 crc kubenswrapper[4730]: I0131 16:48:15.674370 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:16 crc kubenswrapper[4730]: I0131 16:48:16.274179 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="6dd00137c2b55ba8911a6cf41645bd5bc9fe9443ee82beb8e8fc3780dbabffec" exitCode=0 Jan 31 16:48:16 crc kubenswrapper[4730]: I0131 16:48:16.274220 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"6dd00137c2b55ba8911a6cf41645bd5bc9fe9443ee82beb8e8fc3780dbabffec"} Jan 31 16:48:16 crc kubenswrapper[4730]: I0131 16:48:16.274251 4730 scope.go:117] "RemoveContainer" containerID="e41232a60f932a62d2c5b9d50e9136223d043e8df15499b24ac0f32e2a9687f5" Jan 31 16:48:17 crc kubenswrapper[4730]: E0131 16:48:17.920424 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.236385 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.297362 4730 generic.go:334] "Generic (PLEG): container finished" podID="72340753-9253-4020-a57d-a7d3ae42a591" containerID="59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106" exitCode=0 Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.297428 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerDied","Data":"59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106"} Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.297446 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.297829 4730 scope.go:117] "RemoveContainer" containerID="f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.297957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72340753-9253-4020-a57d-a7d3ae42a591","Type":"ContainerDied","Data":"c212b0deff9f5bec56b5c82d6a43f027869f71badb11a543c1bf2d11919c45fc"} Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.301007 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74"} Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.301314 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.301678 4730 scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.301955 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.309168 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" event={"ID":"05019b69-099e-4b89-b072-ea6b1f2019e3","Type":"ContainerStarted","Data":"e9faa9b458cb34108e57efd0c24d388dbaa42765ffdbfb57bfabb208a5189567"} Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.335097 4730 scope.go:117] "RemoveContainer" containerID="3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.347910 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" podStartSLOduration=1.774446079 podStartE2EDuration="13.347893086s" podCreationTimestamp="2026-01-31 16:48:05 +0000 UTC" firstStartedPulling="2026-01-31 16:48:06.186525805 +0000 UTC m=+1072.992582721" 
lastFinishedPulling="2026-01-31 16:48:17.759972812 +0000 UTC m=+1084.566029728" observedRunningTime="2026-01-31 16:48:18.335870295 +0000 UTC m=+1085.141927211" watchObservedRunningTime="2026-01-31 16:48:18.347893086 +0000 UTC m=+1085.153950012" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.357713 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-config-data\") pod \"72340753-9253-4020-a57d-a7d3ae42a591\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.357888 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-sg-core-conf-yaml\") pod \"72340753-9253-4020-a57d-a7d3ae42a591\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.358089 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh8tf\" (UniqueName: \"kubernetes.io/projected/72340753-9253-4020-a57d-a7d3ae42a591-kube-api-access-gh8tf\") pod \"72340753-9253-4020-a57d-a7d3ae42a591\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.358625 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-log-httpd\") pod \"72340753-9253-4020-a57d-a7d3ae42a591\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.358884 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-run-httpd\") pod \"72340753-9253-4020-a57d-a7d3ae42a591\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.358972 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-scripts\") pod \"72340753-9253-4020-a57d-a7d3ae42a591\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.359151 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-combined-ca-bundle\") pod \"72340753-9253-4020-a57d-a7d3ae42a591\" (UID: \"72340753-9253-4020-a57d-a7d3ae42a591\") " Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.360855 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72340753-9253-4020-a57d-a7d3ae42a591" (UID: "72340753-9253-4020-a57d-a7d3ae42a591"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.361913 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72340753-9253-4020-a57d-a7d3ae42a591" (UID: "72340753-9253-4020-a57d-a7d3ae42a591"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.365934 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-scripts" (OuterVolumeSpecName: "scripts") pod "72340753-9253-4020-a57d-a7d3ae42a591" (UID: "72340753-9253-4020-a57d-a7d3ae42a591"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.366174 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72340753-9253-4020-a57d-a7d3ae42a591-kube-api-access-gh8tf" (OuterVolumeSpecName: "kube-api-access-gh8tf") pod "72340753-9253-4020-a57d-a7d3ae42a591" (UID: "72340753-9253-4020-a57d-a7d3ae42a591"). InnerVolumeSpecName "kube-api-access-gh8tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.380004 4730 scope.go:117] "RemoveContainer" containerID="7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.389556 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72340753-9253-4020-a57d-a7d3ae42a591" (UID: "72340753-9253-4020-a57d-a7d3ae42a591"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.433951 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72340753-9253-4020-a57d-a7d3ae42a591" (UID: "72340753-9253-4020-a57d-a7d3ae42a591"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.461661 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.461697 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.461712 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh8tf\" (UniqueName: \"kubernetes.io/projected/72340753-9253-4020-a57d-a7d3ae42a591-kube-api-access-gh8tf\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.461725 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.461738 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72340753-9253-4020-a57d-a7d3ae42a591-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.461750 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.496074 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-config-data" (OuterVolumeSpecName: "config-data") pod "72340753-9253-4020-a57d-a7d3ae42a591" (UID: "72340753-9253-4020-a57d-a7d3ae42a591"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.558487 4730 scope.go:117] "RemoveContainer" containerID="59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.563736 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72340753-9253-4020-a57d-a7d3ae42a591-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.576718 4730 scope.go:117] "RemoveContainer" containerID="f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f" Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.577077 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f\": container with ID starting with f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f not found: ID does not exist" containerID="f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.577118 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f"} err="failed to get container status \"f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f\": rpc error: code = NotFound desc = could not find container \"f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f\": container with ID starting with f8da133e4be9907adcfea8bdf6ed7369a7cb2e2ead36dd20cbd36e14451d9e4f not found: ID does not exist" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.577160 4730 scope.go:117] "RemoveContainer" containerID="3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b" Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.577461 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b\": container with ID starting with 3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b not found: ID does not exist" containerID="3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.577494 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b"} err="failed to get container status \"3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b\": rpc error: code = NotFound desc = could not find container \"3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b\": container with ID starting with 3d5dd15e4a20bc39d4fcc86e986143837ed9e6e223c3652dc58a1055f08ef43b not found: ID does not exist" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.577525 4730 scope.go:117] "RemoveContainer" containerID="7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1" Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.577819 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1\": container with ID starting with 7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1 not found: ID does not exist" 
containerID="7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.577860 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1"} err="failed to get container status \"7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1\": rpc error: code = NotFound desc = could not find container \"7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1\": container with ID starting with 7f0223326ff214908a7a23ae64c3547681abdc56218527aae53c4f1fe52700c1 not found: ID does not exist" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.577903 4730 scope.go:117] "RemoveContainer" containerID="59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106" Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.578212 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106\": container with ID starting with 59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106 not found: ID does not exist" containerID="59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.578243 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106"} err="failed to get container status \"59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106\": rpc error: code = NotFound desc = could not find container \"59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106\": container with ID starting with 59c27b2420e1c8a50d3937d294c4e9115845a7bc50c2b6f6ccab0ff4c1388106 not found: ID does not exist" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.636361 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.648087 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.665691 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.666103 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="sg-core" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666120 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="sg-core" Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.666133 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="proxy-httpd" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666140 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="proxy-httpd" Jan 31 16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.666165 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-notification-agent" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666171 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-notification-agent" Jan 31 
16:48:18 crc kubenswrapper[4730]: E0131 16:48:18.666180 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-central-agent" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666187 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-central-agent" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666349 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="proxy-httpd" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666363 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-central-agent" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666381 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="ceilometer-notification-agent" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.666394 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="72340753-9253-4020-a57d-a7d3ae42a591" containerName="sg-core" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.668082 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.676219 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.676884 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.677048 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.736456 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:55240->10.217.0.152:8443: read: connection reset by peer" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.767143 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.767203 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75ngk\" (UniqueName: \"kubernetes.io/projected/4abc3572-660b-4c33-ac87-9cb6593a92a4-kube-api-access-75ngk\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.767278 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.767329 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-scripts\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.767440 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-config-data\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.767527 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-log-httpd\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.767553 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-run-httpd\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.870826 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.870992 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-scripts\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.871041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-config-data\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.871100 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-log-httpd\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.871119 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-run-httpd\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.871238 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.871282 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75ngk\" (UniqueName: \"kubernetes.io/projected/4abc3572-660b-4c33-ac87-9cb6593a92a4-kube-api-access-75ngk\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.871768 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-log-httpd\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.872047 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-run-httpd\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.875961 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-scripts\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.880620 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-config-data\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.883297 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.887362 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.893546 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75ngk\" (UniqueName: \"kubernetes.io/projected/4abc3572-660b-4c33-ac87-9cb6593a92a4-kube-api-access-75ngk\") pod \"ceilometer-0\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " pod="openstack/ceilometer-0" Jan 31 16:48:18 crc kubenswrapper[4730]: I0131 16:48:18.984077 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:48:19 crc kubenswrapper[4730]: I0131 16:48:19.328391 4730 generic.go:334] "Generic (PLEG): container finished" podID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerID="80ae24fe31870e02341eacd37399cd3d3009e58750f2e437dca5b64be6345b4d" exitCode=0 Jan 31 16:48:19 crc kubenswrapper[4730]: I0131 16:48:19.328455 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerDied","Data":"80ae24fe31870e02341eacd37399cd3d3009e58750f2e437dca5b64be6345b4d"} Jan 31 16:48:19 crc kubenswrapper[4730]: I0131 16:48:19.328788 4730 scope.go:117] "RemoveContainer" containerID="5f76ea53478fba62d51bf2177248f8d97c1edacf725d569c9a1e0b691cca8300" Jan 31 16:48:19 crc kubenswrapper[4730]: I0131 16:48:19.329583 4730 scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:19 crc kubenswrapper[4730]: E0131 16:48:19.330086 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:19 crc kubenswrapper[4730]: I0131 16:48:19.429094 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:19 crc kubenswrapper[4730]: W0131 16:48:19.513658 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4abc3572_660b_4c33_ac87_9cb6593a92a4.slice/crio-77ae6c6307d94660a2447e76cc71bec3047518f9c3702e02f86140165bb701d8 WatchSource:0}: Error finding container 77ae6c6307d94660a2447e76cc71bec3047518f9c3702e02f86140165bb701d8: Status 404 returned error can't find the container with id 77ae6c6307d94660a2447e76cc71bec3047518f9c3702e02f86140165bb701d8 Jan 31 16:48:20 crc kubenswrapper[4730]: I0131 16:48:20.342498 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerStarted","Data":"af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7"} Jan 31 16:48:20 crc kubenswrapper[4730]: I0131 16:48:20.343969 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerStarted","Data":"77ae6c6307d94660a2447e76cc71bec3047518f9c3702e02f86140165bb701d8"} Jan 31 16:48:20 crc kubenswrapper[4730]: I0131 16:48:20.475991 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72340753-9253-4020-a57d-a7d3ae42a591" path="/var/lib/kubelet/pods/72340753-9253-4020-a57d-a7d3ae42a591/volumes" Jan 31 16:48:21 crc kubenswrapper[4730]: I0131 16:48:21.357857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerStarted","Data":"176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b"} Jan 31 16:48:22 crc kubenswrapper[4730]: I0131 16:48:22.375423 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerStarted","Data":"ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11"} Jan 31 16:48:24 
crc kubenswrapper[4730]: I0131 16:48:24.669929 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:25 crc kubenswrapper[4730]: I0131 16:48:25.409215 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerStarted","Data":"335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409"} Jan 31 16:48:25 crc kubenswrapper[4730]: I0131 16:48:25.409649 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:48:25 crc kubenswrapper[4730]: I0131 16:48:25.442073 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.66357811 podStartE2EDuration="7.44205315s" podCreationTimestamp="2026-01-31 16:48:18 +0000 UTC" firstStartedPulling="2026-01-31 16:48:19.515925375 +0000 UTC m=+1086.321982291" lastFinishedPulling="2026-01-31 16:48:24.294400415 +0000 UTC m=+1091.100457331" observedRunningTime="2026-01-31 16:48:25.434491237 +0000 UTC m=+1092.240548213" watchObservedRunningTime="2026-01-31 16:48:25.44205315 +0000 UTC m=+1092.248110066" Jan 31 16:48:25 crc kubenswrapper[4730]: I0131 16:48:25.662019 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:26 crc kubenswrapper[4730]: I0131 16:48:26.464506 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:48:26 crc kubenswrapper[4730]: I0131 16:48:26.464580 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:48:26 crc kubenswrapper[4730]: I0131 16:48:26.464668 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:48:26 crc kubenswrapper[4730]: E0131 16:48:26.465015 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:26 crc kubenswrapper[4730]: I0131 16:48:26.734478 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.431737 4730 generic.go:334] "Generic (PLEG): container 
finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="1f3360e1f421204b7af9c6c32dc9ed3f548543f1cce4369ddb18b1d85fdb6ad2" exitCode=1 Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.431897 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"1f3360e1f421204b7af9c6c32dc9ed3f548543f1cce4369ddb18b1d85fdb6ad2"} Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.432036 4730 scope.go:117] "RemoveContainer" containerID="435927c74b967706fe7ebdbf1eac2e63fbd02dfb571e581ab2e5e21f1b4671f8" Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.432693 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.432752 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.432837 4730 scope.go:117] "RemoveContainer" containerID="1f3360e1f421204b7af9c6c32dc9ed3f548543f1cce4369ddb18b1d85fdb6ad2" Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.432861 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:48:27 crc kubenswrapper[4730]: E0131 16:48:27.433150 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:27 crc kubenswrapper[4730]: I0131 16:48:27.665233 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:29 crc kubenswrapper[4730]: I0131 16:48:29.458682 4730 generic.go:334] "Generic (PLEG): container finished" podID="05019b69-099e-4b89-b072-ea6b1f2019e3" containerID="e9faa9b458cb34108e57efd0c24d388dbaa42765ffdbfb57bfabb208a5189567" exitCode=0 Jan 31 16:48:29 crc kubenswrapper[4730]: I0131 16:48:29.458783 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" event={"ID":"05019b69-099e-4b89-b072-ea6b1f2019e3","Type":"ContainerDied","Data":"e9faa9b458cb34108e57efd0c24d388dbaa42765ffdbfb57bfabb208a5189567"} Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.659924 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed 
with statuscode: 503" Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.660097 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.660647 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.661122 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.661168 4730 scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.661225 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" gracePeriod=30 Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.664106 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:48:30 crc kubenswrapper[4730]: I0131 16:48:30.944705 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.026860 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-scripts\") pod \"05019b69-099e-4b89-b072-ea6b1f2019e3\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.026912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-combined-ca-bundle\") pod \"05019b69-099e-4b89-b072-ea6b1f2019e3\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.026963 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js484\" (UniqueName: \"kubernetes.io/projected/05019b69-099e-4b89-b072-ea6b1f2019e3-kube-api-access-js484\") pod \"05019b69-099e-4b89-b072-ea6b1f2019e3\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.027137 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-config-data\") pod \"05019b69-099e-4b89-b072-ea6b1f2019e3\" (UID: \"05019b69-099e-4b89-b072-ea6b1f2019e3\") " Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.045389 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05019b69-099e-4b89-b072-ea6b1f2019e3-kube-api-access-js484" (OuterVolumeSpecName: "kube-api-access-js484") pod "05019b69-099e-4b89-b072-ea6b1f2019e3" (UID: "05019b69-099e-4b89-b072-ea6b1f2019e3"). InnerVolumeSpecName "kube-api-access-js484". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.050192 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-scripts" (OuterVolumeSpecName: "scripts") pod "05019b69-099e-4b89-b072-ea6b1f2019e3" (UID: "05019b69-099e-4b89-b072-ea6b1f2019e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:31 crc kubenswrapper[4730]: E0131 16:48:31.058280 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.063856 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05019b69-099e-4b89-b072-ea6b1f2019e3" (UID: "05019b69-099e-4b89-b072-ea6b1f2019e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.072164 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-config-data" (OuterVolumeSpecName: "config-data") pod "05019b69-099e-4b89-b072-ea6b1f2019e3" (UID: "05019b69-099e-4b89-b072-ea6b1f2019e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.128989 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.129020 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.129031 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05019b69-099e-4b89-b072-ea6b1f2019e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.129040 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js484\" (UniqueName: \"kubernetes.io/projected/05019b69-099e-4b89-b072-ea6b1f2019e3-kube-api-access-js484\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.483527 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" exitCode=0 Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.483649 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74"} Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.483714 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414"} Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.483744 4730 scope.go:117] "RemoveContainer" containerID="6dd00137c2b55ba8911a6cf41645bd5bc9fe9443ee82beb8e8fc3780dbabffec" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.484144 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.484899 4730 scope.go:117] "RemoveContainer" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" Jan 31 16:48:31 crc kubenswrapper[4730]: E0131 16:48:31.485489 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.502901 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-dvj5l" event={"ID":"05019b69-099e-4b89-b072-ea6b1f2019e3","Type":"ContainerDied","Data":"5cb7f64590e5ac2a8f596b39040e4b0e9bc45547287f37742fa4a92fc23b7d59"} Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.502954 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cb7f64590e5ac2a8f596b39040e4b0e9bc45547287f37742fa4a92fc23b7d59" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.503054 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dvj5l" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.627254 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 16:48:31 crc kubenswrapper[4730]: E0131 16:48:31.627912 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05019b69-099e-4b89-b072-ea6b1f2019e3" containerName="nova-cell0-conductor-db-sync" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.627927 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="05019b69-099e-4b89-b072-ea6b1f2019e3" containerName="nova-cell0-conductor-db-sync" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.628089 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="05019b69-099e-4b89-b072-ea6b1f2019e3" containerName="nova-cell0-conductor-db-sync" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.628631 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.631092 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-flwwt" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.631521 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.640057 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.741181 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.741353 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xst\" (UniqueName: \"kubernetes.io/projected/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-kube-api-access-p9xst\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.741569 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.843157 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-combined-ca-bundle\") pod 
\"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.843279 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.843329 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9xst\" (UniqueName: \"kubernetes.io/projected/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-kube-api-access-p9xst\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.848083 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.848368 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.868882 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9xst\" (UniqueName: \"kubernetes.io/projected/d10fc2fe-5518-491d-bc51-7f8a1c7c7885-kube-api-access-p9xst\") pod \"nova-cell0-conductor-0\" (UID: \"d10fc2fe-5518-491d-bc51-7f8a1c7c7885\") " pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:31 crc kubenswrapper[4730]: I0131 16:48:31.947789 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:32 crc kubenswrapper[4730]: I0131 16:48:32.389840 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 16:48:32 crc kubenswrapper[4730]: I0131 16:48:32.518276 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" exitCode=1 Jan 31 16:48:32 crc kubenswrapper[4730]: I0131 16:48:32.518384 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414"} Jan 31 16:48:32 crc kubenswrapper[4730]: I0131 16:48:32.518717 4730 scope.go:117] "RemoveContainer" containerID="896655c848b0a2b76d2a95800ef2fd1846e710c7780f0bfc559f734b6b875bd2" Jan 31 16:48:32 crc kubenswrapper[4730]: I0131 16:48:32.519240 4730 scope.go:117] "RemoveContainer" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" Jan 31 16:48:32 crc kubenswrapper[4730]: I0131 16:48:32.519264 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:48:32 crc kubenswrapper[4730]: E0131 16:48:32.519495 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:32 crc kubenswrapper[4730]: I0131 16:48:32.521854 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d10fc2fe-5518-491d-bc51-7f8a1c7c7885","Type":"ContainerStarted","Data":"2fc75747493b47003554f880b6502abe38758d01e7ca2d2efdca27a958d0c1e6"} Jan 31 16:48:33 crc kubenswrapper[4730]: I0131 16:48:33.540830 4730 scope.go:117] "RemoveContainer" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" Jan 31 16:48:33 crc kubenswrapper[4730]: I0131 16:48:33.541135 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:48:33 crc kubenswrapper[4730]: E0131 16:48:33.541500 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:33 crc kubenswrapper[4730]: I0131 16:48:33.545040 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"d10fc2fe-5518-491d-bc51-7f8a1c7c7885","Type":"ContainerStarted","Data":"4e1931413884358e106f52750a03bb2bb06fa81d776232bd78ef457b77edc736"} Jan 31 16:48:33 crc kubenswrapper[4730]: I0131 16:48:33.545739 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:33 crc kubenswrapper[4730]: I0131 16:48:33.594061 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.594038983 podStartE2EDuration="2.594038983s" podCreationTimestamp="2026-01-31 16:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:48:33.583764184 +0000 UTC m=+1100.389821100" watchObservedRunningTime="2026-01-31 16:48:33.594038983 +0000 UTC m=+1100.400095919" Jan 31 16:48:33 crc kubenswrapper[4730]: I0131 16:48:33.653783 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:48:34 crc kubenswrapper[4730]: I0131 16:48:34.556652 4730 scope.go:117] "RemoveContainer" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" Jan 31 16:48:34 crc kubenswrapper[4730]: I0131 16:48:34.556695 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:48:34 crc kubenswrapper[4730]: E0131 16:48:34.557124 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:36 crc kubenswrapper[4730]: I0131 16:48:36.733779 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bd455fb-h66br" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 31 16:48:41 crc kubenswrapper[4730]: I0131 16:48:41.465890 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:48:41 crc kubenswrapper[4730]: I0131 16:48:41.466738 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:48:41 crc kubenswrapper[4730]: I0131 16:48:41.467088 4730 scope.go:117] "RemoveContainer" containerID="1f3360e1f421204b7af9c6c32dc9ed3f548543f1cce4369ddb18b1d85fdb6ad2" Jan 31 16:48:41 crc kubenswrapper[4730]: I0131 16:48:41.467150 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:48:41 crc kubenswrapper[4730]: E0131 16:48:41.753079 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for 
\"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:41 crc kubenswrapper[4730]: I0131 16:48:41.985302 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.633461 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hbl4w"] Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.634793 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.640940 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.641350 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.645016 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hbl4w"] Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.661474 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070"} Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.662290 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.662361 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.662459 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:48:42 crc kubenswrapper[4730]: E0131 16:48:42.662763 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.785250 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-config-data\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " 
pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.785387 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.785414 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5j5j\" (UniqueName: \"kubernetes.io/projected/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-kube-api-access-n5j5j\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.785439 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-scripts\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.834230 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.835656 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.847372 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.855370 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.887936 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.889114 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-scripts\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.889219 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-config-data\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.889315 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.889345 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5j5j\" (UniqueName: \"kubernetes.io/projected/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-kube-api-access-n5j5j\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: 
\"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.889445 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.918156 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.965525 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.966183 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-scripts\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.967833 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-config-data\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:42 crc kubenswrapper[4730]: I0131 16:48:42.970273 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5j5j\" (UniqueName: \"kubernetes.io/projected/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-kube-api-access-n5j5j\") pod \"nova-cell0-cell-mapping-hbl4w\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.011083 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.013848 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqfb\" (UniqueName: \"kubernetes.io/projected/575160a7-8757-4da4-9eec-9cc6158c7d45-kube-api-access-5zqfb\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.014090 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-config-data\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.014116 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.014165 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.014242 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.014341 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575160a7-8757-4da4-9eec-9cc6158c7d45-logs\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.014404 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x846z\" (UniqueName: \"kubernetes.io/projected/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-kube-api-access-x846z\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.042199 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.115501 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-config-data\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.115560 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.115583 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.115624 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.115672 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575160a7-8757-4da4-9eec-9cc6158c7d45-logs\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.115702 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x846z\" (UniqueName: \"kubernetes.io/projected/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-kube-api-access-x846z\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.115730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zqfb\" (UniqueName: \"kubernetes.io/projected/575160a7-8757-4da4-9eec-9cc6158c7d45-kube-api-access-5zqfb\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.126096 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.126373 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575160a7-8757-4da4-9eec-9cc6158c7d45-logs\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.128138 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-config-data\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.137531 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.137612 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.138818 4730 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.142625 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.148999 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.175220 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zqfb\" (UniqueName: \"kubernetes.io/projected/575160a7-8757-4da4-9eec-9cc6158c7d45-kube-api-access-5zqfb\") pod \"nova-api-0\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.179523 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x846z\" (UniqueName: \"kubernetes.io/projected/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-kube-api-access-x846z\") pod \"nova-cell1-novncproxy-0\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.180226 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.211014 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.212634 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.228152 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.228527 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.256131 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-v686n"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.257648 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.265856 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-v686n"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.324874 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-config-data\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.325928 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.325972 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.326017 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7vr\" (UniqueName: \"kubernetes.io/projected/de660e39-bb4a-4e40-bcd8-d87354323cc4-kube-api-access-dc7vr\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.326085 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvsrj\" (UniqueName: \"kubernetes.io/projected/409f06cc-0b07-4015-8dbf-0d25c902b15f-kube-api-access-gvsrj\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.326169 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-config-data\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.326767 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409f06cc-0b07-4015-8dbf-0d25c902b15f-logs\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.428494 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7vr\" (UniqueName: \"kubernetes.io/projected/de660e39-bb4a-4e40-bcd8-d87354323cc4-kube-api-access-dc7vr\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.428545 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74d4\" (UniqueName: 
\"kubernetes.io/projected/fb0d8830-2b7d-4646-9973-9f72e59222bc-kube-api-access-c74d4\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.428596 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvsrj\" (UniqueName: \"kubernetes.io/projected/409f06cc-0b07-4015-8dbf-0d25c902b15f-kube-api-access-gvsrj\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.428619 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-sb\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.428840 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-dns-svc\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.428894 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-config-data\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.429581 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409f06cc-0b07-4015-8dbf-0d25c902b15f-logs\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.429659 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-config-data\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.429732 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-config\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.429791 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.429851 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " 
pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.429879 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-nb\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.431140 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409f06cc-0b07-4015-8dbf-0d25c902b15f-logs\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.435501 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-config-data\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.436298 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.443349 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-config-data\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.443833 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.447619 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7vr\" (UniqueName: \"kubernetes.io/projected/de660e39-bb4a-4e40-bcd8-d87354323cc4-kube-api-access-dc7vr\") pod \"nova-scheduler-0\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.450972 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvsrj\" (UniqueName: \"kubernetes.io/projected/409f06cc-0b07-4015-8dbf-0d25c902b15f-kube-api-access-gvsrj\") pod \"nova-metadata-0\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.465198 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.476492 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.513198 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.515152 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hbl4w"] Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.534181 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-dns-svc\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.534526 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-config\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.534566 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-nb\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.534607 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c74d4\" (UniqueName: \"kubernetes.io/projected/fb0d8830-2b7d-4646-9973-9f72e59222bc-kube-api-access-c74d4\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.534653 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-sb\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.535533 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-sb\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.535761 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-config\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.536076 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-nb\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.536326 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-dns-svc\") pod \"dnsmasq-dns-56d99cc479-v686n\" 
(UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.553059 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.567320 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c74d4\" (UniqueName: \"kubernetes.io/projected/fb0d8830-2b7d-4646-9973-9f72e59222bc-kube-api-access-c74d4\") pod \"dnsmasq-dns-56d99cc479-v686n\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.583616 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.679293 4730 generic.go:334] "Generic (PLEG): container finished" podID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerID="31c3f1d338e9abdfe52a8ea48e754f02a316f206eec6752e7c454b2a52955b20" exitCode=137 Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.679454 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerDied","Data":"31c3f1d338e9abdfe52a8ea48e754f02a316f206eec6752e7c454b2a52955b20"} Jan 31 16:48:43 crc kubenswrapper[4730]: I0131 16:48:43.680476 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hbl4w" event={"ID":"638775e1-f41e-4dd4-a0b3-0a77ceccd15b","Type":"ContainerStarted","Data":"e9d22c892eeea3925f669d6025ea6a2905965c25636e9a9d2cf6166783711d08"} Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.115287 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.164505 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-scripts\") pod \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.164554 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-tls-certs\") pod \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.164606 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-logs\") pod \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.164660 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sgxt\" (UniqueName: \"kubernetes.io/projected/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-kube-api-access-4sgxt\") pod \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.164734 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-combined-ca-bundle\") pod \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.164830 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-config-data\") pod \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.164898 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-secret-key\") pod \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\" (UID: \"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec\") " Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.167225 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-logs" (OuterVolumeSpecName: "logs") pod "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" (UID: "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.191247 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" (UID: "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.195790 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-kube-api-access-4sgxt" (OuterVolumeSpecName: "kube-api-access-4sgxt") pod "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" (UID: "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec"). InnerVolumeSpecName "kube-api-access-4sgxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.233340 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-scripts" (OuterVolumeSpecName: "scripts") pod "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" (UID: "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.266788 4730 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.266830 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.266840 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.266848 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sgxt\" (UniqueName: \"kubernetes.io/projected/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-kube-api-access-4sgxt\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.297564 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" (UID: "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.298253 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-config-data" (OuterVolumeSpecName: "config-data") pod "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" (UID: "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.333664 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" (UID: "690b58d4-36db-4d31-a09b-d0d7dcc0e2ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.367944 4730 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.368136 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.368214 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.444381 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.558827 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b9fwh"] Jan 31 16:48:44 crc kubenswrapper[4730]: E0131 16:48:44.559386 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon-log" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.559436 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon-log" Jan 31 16:48:44 crc kubenswrapper[4730]: E0131 16:48:44.559449 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.559509 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" Jan 31 16:48:44 crc kubenswrapper[4730]: E0131 16:48:44.559546 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.559554 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.561271 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon-log" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.561318 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.561328 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" containerName="horizon" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.561956 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.571474 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.572204 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.590668 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b9fwh"] Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.632767 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:48:44 crc kubenswrapper[4730]: W0131 16:48:44.645679 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod575160a7_8757_4da4_9eec_9cc6158c7d45.slice/crio-56f94d89a8db9acdf4421388df801e65f2e88ca3b564625014d21b47e0d5e0b5 WatchSource:0}: Error finding container 56f94d89a8db9acdf4421388df801e65f2e88ca3b564625014d21b47e0d5e0b5: Status 404 returned error can't find the container with id 56f94d89a8db9acdf4421388df801e65f2e88ca3b564625014d21b47e0d5e0b5 Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.647595 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.674326 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.674476 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-config-data\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.674535 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qspx\" (UniqueName: \"kubernetes.io/projected/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-kube-api-access-5qspx\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.674642 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-scripts\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.704949 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hbl4w" event={"ID":"638775e1-f41e-4dd4-a0b3-0a77ceccd15b","Type":"ContainerStarted","Data":"ddf97d903b360d8d8e881549e0bc9e812fb3a927b2fd766ecb3ce83d053ebff4"} Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.706133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"575160a7-8757-4da4-9eec-9cc6158c7d45","Type":"ContainerStarted","Data":"56f94d89a8db9acdf4421388df801e65f2e88ca3b564625014d21b47e0d5e0b5"} Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.707580 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd25f1e4-9703-430d-96e1-9dc82dbcde4b","Type":"ContainerStarted","Data":"1b77a561646cc427ef480dc3dc10e712cf10d1c89e68c543d6282d02d0c31893"} Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.713929 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"de660e39-bb4a-4e40-bcd8-d87354323cc4","Type":"ContainerStarted","Data":"8e0553fcf7ccf4017fafe7038eea4eb0e4de7d2c7c4af9fbd75449c566c5e2a3"} Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.723096 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bd455fb-h66br" event={"ID":"690b58d4-36db-4d31-a09b-d0d7dcc0e2ec","Type":"ContainerDied","Data":"8e032e10479dda715828c80666b92089178a3b27ca2130404eb55c1f9d258d72"} Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.723141 4730 scope.go:117] "RemoveContainer" containerID="80ae24fe31870e02341eacd37399cd3d3009e58750f2e437dca5b64be6345b4d" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.723166 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b5bd455fb-h66br" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.764920 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hbl4w" podStartSLOduration=2.764894248 podStartE2EDuration="2.764894248s" podCreationTimestamp="2026-01-31 16:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:48:44.725855256 +0000 UTC m=+1111.531912172" watchObservedRunningTime="2026-01-31 16:48:44.764894248 +0000 UTC m=+1111.570951164" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.776259 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b5bd455fb-h66br"] Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.778266 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.778494 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-config-data\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.778574 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qspx\" (UniqueName: \"kubernetes.io/projected/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-kube-api-access-5qspx\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.778718 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-scripts\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.789760 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b5bd455fb-h66br"] Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.790579 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-scripts\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.794388 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-config-data\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.796447 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.800731 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qspx\" (UniqueName: \"kubernetes.io/projected/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-kube-api-access-5qspx\") pod \"nova-cell1-conductor-db-sync-b9fwh\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.827570 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-v686n"] Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.835153 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.889878 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:44 crc kubenswrapper[4730]: I0131 16:48:44.914527 4730 scope.go:117] "RemoveContainer" containerID="31c3f1d338e9abdfe52a8ea48e754f02a316f206eec6752e7c454b2a52955b20" Jan 31 16:48:45 crc kubenswrapper[4730]: I0131 16:48:45.520196 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b9fwh"] Jan 31 16:48:45 crc kubenswrapper[4730]: I0131 16:48:45.733543 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"409f06cc-0b07-4015-8dbf-0d25c902b15f","Type":"ContainerStarted","Data":"6510dec68d33891ee9f73bce271a24a09b458219044daacdcde9b594877cc2e2"} Jan 31 16:48:45 crc kubenswrapper[4730]: I0131 16:48:45.741859 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" event={"ID":"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714","Type":"ContainerStarted","Data":"e954fb1e58e04fd87a8d97192bfcefceec3c6883233869e284d35b469862b1a2"} Jan 31 16:48:45 crc kubenswrapper[4730]: I0131 16:48:45.743505 4730 generic.go:334] "Generic (PLEG): container finished" podID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerID="7247ad9a1c5d1da5f50bf9cf47f358cc3f7973abe8066ae7a04b1940b435ed3e" exitCode=0 Jan 31 16:48:45 crc kubenswrapper[4730]: I0131 16:48:45.743591 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-v686n" event={"ID":"fb0d8830-2b7d-4646-9973-9f72e59222bc","Type":"ContainerDied","Data":"7247ad9a1c5d1da5f50bf9cf47f358cc3f7973abe8066ae7a04b1940b435ed3e"} Jan 31 16:48:45 crc kubenswrapper[4730]: I0131 16:48:45.743654 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-v686n" event={"ID":"fb0d8830-2b7d-4646-9973-9f72e59222bc","Type":"ContainerStarted","Data":"4f3a4d20a5baaddd798296b977ac8bb567f5acfcd35aa38ae935787150e21b80"} Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.476825 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690b58d4-36db-4d31-a09b-d0d7dcc0e2ec" path="/var/lib/kubelet/pods/690b58d4-36db-4d31-a09b-d0d7dcc0e2ec/volumes" Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.760220 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" event={"ID":"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714","Type":"ContainerStarted","Data":"b412524c906028320ad3e4ff45adbedff39a1d3e9259b1050ba25cc562ed465e"} Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.781124 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-v686n" event={"ID":"fb0d8830-2b7d-4646-9973-9f72e59222bc","Type":"ContainerStarted","Data":"0b5703d3ce0ea318286f6b16d7d34bdca84447492bffca251f523dd5b1a385f7"} Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.781421 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.794571 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" podStartSLOduration=2.794551264 podStartE2EDuration="2.794551264s" podCreationTimestamp="2026-01-31 16:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:48:46.786224006 +0000 UTC m=+1113.592280922" watchObservedRunningTime="2026-01-31 16:48:46.794551264 +0000 UTC 
m=+1113.600608200" Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.816544 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56d99cc479-v686n" podStartSLOduration=3.816527698 podStartE2EDuration="3.816527698s" podCreationTimestamp="2026-01-31 16:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:48:46.806492402 +0000 UTC m=+1113.612549318" watchObservedRunningTime="2026-01-31 16:48:46.816527698 +0000 UTC m=+1113.622584614" Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.858650 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:46 crc kubenswrapper[4730]: I0131 16:48:46.900858 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:48:47 crc kubenswrapper[4730]: I0131 16:48:47.463871 4730 scope.go:117] "RemoveContainer" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" Jan 31 16:48:47 crc kubenswrapper[4730]: I0131 16:48:47.463893 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:48:47 crc kubenswrapper[4730]: E0131 16:48:47.464104 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:48:48 crc kubenswrapper[4730]: I0131 16:48:48.799640 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"de660e39-bb4a-4e40-bcd8-d87354323cc4","Type":"ContainerStarted","Data":"240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e"} Jan 31 16:48:48 crc kubenswrapper[4730]: I0131 16:48:48.802148 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"409f06cc-0b07-4015-8dbf-0d25c902b15f","Type":"ContainerStarted","Data":"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad"} Jan 31 16:48:48 crc kubenswrapper[4730]: I0131 16:48:48.804491 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"575160a7-8757-4da4-9eec-9cc6158c7d45","Type":"ContainerStarted","Data":"f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f"} Jan 31 16:48:48 crc kubenswrapper[4730]: I0131 16:48:48.808795 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd25f1e4-9703-430d-96e1-9dc82dbcde4b","Type":"ContainerStarted","Data":"9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0"} Jan 31 16:48:48 crc kubenswrapper[4730]: I0131 16:48:48.808943 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dd25f1e4-9703-430d-96e1-9dc82dbcde4b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0" gracePeriod=30 Jan 31 16:48:48 crc kubenswrapper[4730]: I0131 16:48:48.823971 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.94526582 podStartE2EDuration="5.823952954s" podCreationTimestamp="2026-01-31 16:48:43 +0000 UTC" firstStartedPulling="2026-01-31 16:48:44.445606129 +0000 UTC m=+1111.251663045" lastFinishedPulling="2026-01-31 16:48:48.324293263 +0000 UTC m=+1115.130350179" observedRunningTime="2026-01-31 16:48:48.8201869 +0000 UTC m=+1115.626243816" watchObservedRunningTime="2026-01-31 16:48:48.823952954 +0000 UTC m=+1115.630009870" Jan 31 16:48:48 crc kubenswrapper[4730]: I0131 16:48:48.844073 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.151290148 podStartE2EDuration="6.844057156s" podCreationTimestamp="2026-01-31 16:48:42 +0000 UTC" firstStartedPulling="2026-01-31 16:48:44.63168966 +0000 UTC m=+1111.437746576" lastFinishedPulling="2026-01-31 16:48:48.324456668 +0000 UTC m=+1115.130513584" observedRunningTime="2026-01-31 16:48:48.839420829 +0000 UTC m=+1115.645477745" watchObservedRunningTime="2026-01-31 16:48:48.844057156 +0000 UTC m=+1115.650114072" Jan 31 16:48:49 crc kubenswrapper[4730]: I0131 16:48:49.001205 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 16:48:49 crc kubenswrapper[4730]: I0131 16:48:49.819840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"575160a7-8757-4da4-9eec-9cc6158c7d45","Type":"ContainerStarted","Data":"f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b"} Jan 31 16:48:49 crc kubenswrapper[4730]: I0131 16:48:49.822770 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-log" containerID="cri-o://2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad" gracePeriod=30 Jan 31 16:48:49 crc kubenswrapper[4730]: I0131 16:48:49.822982 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"409f06cc-0b07-4015-8dbf-0d25c902b15f","Type":"ContainerStarted","Data":"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33"} Jan 31 16:48:49 crc kubenswrapper[4730]: I0131 16:48:49.823038 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-metadata" containerID="cri-o://4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33" gracePeriod=30 Jan 31 16:48:49 crc kubenswrapper[4730]: I0131 16:48:49.856214 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.185715394 podStartE2EDuration="7.856194s" podCreationTimestamp="2026-01-31 16:48:42 +0000 UTC" firstStartedPulling="2026-01-31 16:48:44.655503264 +0000 UTC m=+1111.461560170" lastFinishedPulling="2026-01-31 16:48:48.32598186 +0000 UTC m=+1115.132038776" observedRunningTime="2026-01-31 16:48:49.85330102 +0000 UTC m=+1116.659357966" watchObservedRunningTime="2026-01-31 16:48:49.856194 +0000 UTC m=+1116.662250916" Jan 31 16:48:49 crc kubenswrapper[4730]: I0131 16:48:49.875261 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.481823456 podStartE2EDuration="6.875243653s" podCreationTimestamp="2026-01-31 16:48:43 +0000 UTC" 
firstStartedPulling="2026-01-31 16:48:44.934909587 +0000 UTC m=+1111.740966503" lastFinishedPulling="2026-01-31 16:48:48.328329744 +0000 UTC m=+1115.134386700" observedRunningTime="2026-01-31 16:48:49.867647604 +0000 UTC m=+1116.673704520" watchObservedRunningTime="2026-01-31 16:48:49.875243653 +0000 UTC m=+1116.681300569" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.484736 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.625660 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-config-data\") pod \"409f06cc-0b07-4015-8dbf-0d25c902b15f\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.625720 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409f06cc-0b07-4015-8dbf-0d25c902b15f-logs\") pod \"409f06cc-0b07-4015-8dbf-0d25c902b15f\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.625891 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvsrj\" (UniqueName: \"kubernetes.io/projected/409f06cc-0b07-4015-8dbf-0d25c902b15f-kube-api-access-gvsrj\") pod \"409f06cc-0b07-4015-8dbf-0d25c902b15f\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.625967 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-combined-ca-bundle\") pod \"409f06cc-0b07-4015-8dbf-0d25c902b15f\" (UID: \"409f06cc-0b07-4015-8dbf-0d25c902b15f\") " Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.626297 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/409f06cc-0b07-4015-8dbf-0d25c902b15f-logs" (OuterVolumeSpecName: "logs") pod "409f06cc-0b07-4015-8dbf-0d25c902b15f" (UID: "409f06cc-0b07-4015-8dbf-0d25c902b15f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.626628 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409f06cc-0b07-4015-8dbf-0d25c902b15f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.639100 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409f06cc-0b07-4015-8dbf-0d25c902b15f-kube-api-access-gvsrj" (OuterVolumeSpecName: "kube-api-access-gvsrj") pod "409f06cc-0b07-4015-8dbf-0d25c902b15f" (UID: "409f06cc-0b07-4015-8dbf-0d25c902b15f"). InnerVolumeSpecName "kube-api-access-gvsrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.652749 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "409f06cc-0b07-4015-8dbf-0d25c902b15f" (UID: "409f06cc-0b07-4015-8dbf-0d25c902b15f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.672205 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-config-data" (OuterVolumeSpecName: "config-data") pod "409f06cc-0b07-4015-8dbf-0d25c902b15f" (UID: "409f06cc-0b07-4015-8dbf-0d25c902b15f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.728093 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.728548 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409f06cc-0b07-4015-8dbf-0d25c902b15f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.728607 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvsrj\" (UniqueName: \"kubernetes.io/projected/409f06cc-0b07-4015-8dbf-0d25c902b15f-kube-api-access-gvsrj\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.840679 4730 generic.go:334] "Generic (PLEG): container finished" podID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerID="4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33" exitCode=0 Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.840718 4730 generic.go:334] "Generic (PLEG): container finished" podID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerID="2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad" exitCode=143 Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.840876 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"409f06cc-0b07-4015-8dbf-0d25c902b15f","Type":"ContainerDied","Data":"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33"} Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.840910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"409f06cc-0b07-4015-8dbf-0d25c902b15f","Type":"ContainerDied","Data":"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad"} Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.840919 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.840928 4730 scope.go:117] "RemoveContainer" containerID="4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.840919 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"409f06cc-0b07-4015-8dbf-0d25c902b15f","Type":"ContainerDied","Data":"6510dec68d33891ee9f73bce271a24a09b458219044daacdcde9b594877cc2e2"} Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.874613 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.878106 4730 scope.go:117] "RemoveContainer" containerID="2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.908139 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.942031 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:50 crc kubenswrapper[4730]: E0131 16:48:50.942380 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-log" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.942402 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-log" Jan 31 16:48:50 crc kubenswrapper[4730]: E0131 16:48:50.942440 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-metadata" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.942447 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-metadata" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.942616 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-metadata" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.942633 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" containerName="nova-metadata-log" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.943604 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.946949 4730 scope.go:117] "RemoveContainer" containerID="4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.947182 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.947298 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 16:48:50 crc kubenswrapper[4730]: E0131 16:48:50.950962 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33\": container with ID starting with 4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33 not found: ID does not exist" containerID="4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.951015 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33"} err="failed to get container status \"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33\": rpc error: code = NotFound desc = could not find container \"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33\": container with ID starting with 4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33 not found: ID does not exist" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.951052 4730 scope.go:117] "RemoveContainer" containerID="2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad" Jan 31 16:48:50 crc kubenswrapper[4730]: E0131 16:48:50.954897 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad\": container with ID starting with 2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad not found: ID does not exist" containerID="2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.955008 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad"} err="failed to get container status \"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad\": rpc error: code = NotFound desc = could not find container \"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad\": container with ID starting with 2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad not found: ID does not exist" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.955089 4730 scope.go:117] "RemoveContainer" containerID="4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.955674 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33"} err="failed to get container status \"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33\": rpc error: code = NotFound desc = could not find container \"4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33\": container with ID starting with 
4cd03856db91ac5243be4afc7f9b89ea90947799fe8458a0190582968ff36c33 not found: ID does not exist" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.957565 4730 scope.go:117] "RemoveContainer" containerID="2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.958346 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad"} err="failed to get container status \"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad\": rpc error: code = NotFound desc = could not find container \"2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad\": container with ID starting with 2b07c6262c6a262e42582a8081fba119ec607a79c06f0da98151b4d763c40dad not found: ID does not exist" Jan 31 16:48:50 crc kubenswrapper[4730]: I0131 16:48:50.959865 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.137242 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-config-data\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.137616 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.137658 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.137737 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e58ad8e-757e-42a6-a6f3-fd573e185e50-logs\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.137764 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx4xd\" (UniqueName: \"kubernetes.io/projected/3e58ad8e-757e-42a6-a6f3-fd573e185e50-kube-api-access-hx4xd\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.239170 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.239223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.239271 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e58ad8e-757e-42a6-a6f3-fd573e185e50-logs\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.239299 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx4xd\" (UniqueName: \"kubernetes.io/projected/3e58ad8e-757e-42a6-a6f3-fd573e185e50-kube-api-access-hx4xd\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.239381 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-config-data\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.239869 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e58ad8e-757e-42a6-a6f3-fd573e185e50-logs\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.243851 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-config-data\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.244260 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.245524 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.256643 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx4xd\" (UniqueName: \"kubernetes.io/projected/3e58ad8e-757e-42a6-a6f3-fd573e185e50-kube-api-access-hx4xd\") pod \"nova-metadata-0\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.269002 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.796711 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:51 crc kubenswrapper[4730]: I0131 16:48:51.852135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e58ad8e-757e-42a6-a6f3-fd573e185e50","Type":"ContainerStarted","Data":"48409653cc75e9a1d8b2e47ac103fe992d7bb529bc8460b8ed1dc69124943f97"} Jan 31 16:48:52 crc kubenswrapper[4730]: I0131 16:48:52.474122 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409f06cc-0b07-4015-8dbf-0d25c902b15f" path="/var/lib/kubelet/pods/409f06cc-0b07-4015-8dbf-0d25c902b15f/volumes" Jan 31 16:48:52 crc kubenswrapper[4730]: I0131 16:48:52.860399 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e58ad8e-757e-42a6-a6f3-fd573e185e50","Type":"ContainerStarted","Data":"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250"} Jan 31 16:48:52 crc kubenswrapper[4730]: I0131 16:48:52.860627 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e58ad8e-757e-42a6-a6f3-fd573e185e50","Type":"ContainerStarted","Data":"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86"} Jan 31 16:48:52 crc kubenswrapper[4730]: I0131 16:48:52.885930 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.885909479 podStartE2EDuration="2.885909479s" podCreationTimestamp="2026-01-31 16:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:48:52.87976136 +0000 UTC m=+1119.685818276" watchObservedRunningTime="2026-01-31 16:48:52.885909479 +0000 UTC m=+1119.691966415" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.158661 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.158871 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7a3833dd-076f-425d-bcf2-05c52520be71" containerName="kube-state-metrics" containerID="cri-o://61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b" gracePeriod=30 Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.465793 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.465862 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.479239 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.514823 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.515105 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.567781 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.587061 4730 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.693223 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-t7tcb"] Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.698311 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerName="dnsmasq-dns" containerID="cri-o://ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f" gracePeriod=10 Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.748049 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.806442 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t54gm\" (UniqueName: \"kubernetes.io/projected/7a3833dd-076f-425d-bcf2-05c52520be71-kube-api-access-t54gm\") pod \"7a3833dd-076f-425d-bcf2-05c52520be71\" (UID: \"7a3833dd-076f-425d-bcf2-05c52520be71\") " Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.834115 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3833dd-076f-425d-bcf2-05c52520be71-kube-api-access-t54gm" (OuterVolumeSpecName: "kube-api-access-t54gm") pod "7a3833dd-076f-425d-bcf2-05c52520be71" (UID: "7a3833dd-076f-425d-bcf2-05c52520be71"). InnerVolumeSpecName "kube-api-access-t54gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:53 crc kubenswrapper[4730]: I0131 16:48:53.866646 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: connect: connection refused" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.896218 4730 generic.go:334] "Generic (PLEG): container finished" podID="638775e1-f41e-4dd4-a0b3-0a77ceccd15b" containerID="ddf97d903b360d8d8e881549e0bc9e812fb3a927b2fd766ecb3ce83d053ebff4" exitCode=0 Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.896277 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hbl4w" event={"ID":"638775e1-f41e-4dd4-a0b3-0a77ceccd15b","Type":"ContainerDied","Data":"ddf97d903b360d8d8e881549e0bc9e812fb3a927b2fd766ecb3ce83d053ebff4"} Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.904074 4730 generic.go:334] "Generic (PLEG): container finished" podID="2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" containerID="b412524c906028320ad3e4ff45adbedff39a1d3e9259b1050ba25cc562ed465e" exitCode=0 Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.904173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" event={"ID":"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714","Type":"ContainerDied","Data":"b412524c906028320ad3e4ff45adbedff39a1d3e9259b1050ba25cc562ed465e"} Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.921515 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t54gm\" (UniqueName: \"kubernetes.io/projected/7a3833dd-076f-425d-bcf2-05c52520be71-kube-api-access-t54gm\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.922886 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a3833dd-076f-425d-bcf2-05c52520be71" 
containerID="61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b" exitCode=2 Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.923522 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.923601 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a3833dd-076f-425d-bcf2-05c52520be71","Type":"ContainerDied","Data":"61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b"} Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.923630 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a3833dd-076f-425d-bcf2-05c52520be71","Type":"ContainerDied","Data":"0def8b0565011b24ff82ffe1441ff64a54478fdfbffa201bc16b28f7281857b2"} Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:53.923648 4730 scope.go:117] "RemoveContainer" containerID="61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.066361 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.146783 4730 scope.go:117] "RemoveContainer" containerID="61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b" Jan 31 16:48:54 crc kubenswrapper[4730]: E0131 16:48:54.147325 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b\": container with ID starting with 61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b not found: ID does not exist" containerID="61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.147431 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b"} err="failed to get container status \"61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b\": rpc error: code = NotFound desc = could not find container \"61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b\": container with ID starting with 61c4a0d50c86d71c7603e6381f772dbd0eeff5a552970482fb6115b0c3bf213b not found: ID does not exist" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.158075 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.171841 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.181947 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:48:54 crc kubenswrapper[4730]: E0131 16:48:54.182634 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3833dd-076f-425d-bcf2-05c52520be71" containerName="kube-state-metrics" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.182656 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a3833dd-076f-425d-bcf2-05c52520be71" containerName="kube-state-metrics" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.194874 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a3833dd-076f-425d-bcf2-05c52520be71" containerName="kube-state-metrics" Jan 31 
16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.195485 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.198553 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.198818 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.204944 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.232955 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.233290 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjlbf\" (UniqueName: \"kubernetes.io/projected/c494d989-7c60-42f1-91ee-625a507f93d6-kube-api-access-mjlbf\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.233353 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.233478 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.335163 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjlbf\" (UniqueName: \"kubernetes.io/projected/c494d989-7c60-42f1-91ee-625a507f93d6-kube-api-access-mjlbf\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.335223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.335295 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 
16:48:54.335326 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.348495 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.349088 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.349660 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c494d989-7c60-42f1-91ee-625a507f93d6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.354411 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjlbf\" (UniqueName: \"kubernetes.io/projected/c494d989-7c60-42f1-91ee-625a507f93d6-kube-api-access-mjlbf\") pod \"kube-state-metrics-0\" (UID: \"c494d989-7c60-42f1-91ee-625a507f93d6\") " pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.469273 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.469352 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.469466 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.476395 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a3833dd-076f-425d-bcf2-05c52520be71" path="/var/lib/kubelet/pods/7a3833dd-076f-425d-bcf2-05c52520be71/volumes" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.543315 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.583695 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.584149 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.938328 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.971853 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc"} Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.978355 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn86p\" (UniqueName: \"kubernetes.io/projected/9b9e6ee1-bfce-461b-a098-9444b2203023-kube-api-access-cn86p\") pod \"9b9e6ee1-bfce-461b-a098-9444b2203023\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.978563 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-sb\") pod \"9b9e6ee1-bfce-461b-a098-9444b2203023\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.978658 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-nb\") pod \"9b9e6ee1-bfce-461b-a098-9444b2203023\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.978725 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-config\") pod \"9b9e6ee1-bfce-461b-a098-9444b2203023\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.978752 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-dns-svc\") pod \"9b9e6ee1-bfce-461b-a098-9444b2203023\" (UID: \"9b9e6ee1-bfce-461b-a098-9444b2203023\") " Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.999298 4730 generic.go:334] "Generic (PLEG): container finished" podID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerID="ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f" exitCode=0 Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.999486 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" 
event={"ID":"9b9e6ee1-bfce-461b-a098-9444b2203023","Type":"ContainerDied","Data":"ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f"} Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.999557 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" event={"ID":"9b9e6ee1-bfce-461b-a098-9444b2203023","Type":"ContainerDied","Data":"e40741959f74e084c4a846a196b54d55090cf2c3586382d3cd67e06cbac7ed32"} Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.999582 4730 scope.go:117] "RemoveContainer" containerID="ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f" Jan 31 16:48:54 crc kubenswrapper[4730]: I0131 16:48:54.999855 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-t7tcb" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.018046 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9e6ee1-bfce-461b-a098-9444b2203023-kube-api-access-cn86p" (OuterVolumeSpecName: "kube-api-access-cn86p") pod "9b9e6ee1-bfce-461b-a098-9444b2203023" (UID: "9b9e6ee1-bfce-461b-a098-9444b2203023"). InnerVolumeSpecName "kube-api-access-cn86p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.069580 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b9e6ee1-bfce-461b-a098-9444b2203023" (UID: "9b9e6ee1-bfce-461b-a098-9444b2203023"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.089543 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn86p\" (UniqueName: \"kubernetes.io/projected/9b9e6ee1-bfce-461b-a098-9444b2203023-kube-api-access-cn86p\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.089577 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.115289 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-config" (OuterVolumeSpecName: "config") pod "9b9e6ee1-bfce-461b-a098-9444b2203023" (UID: "9b9e6ee1-bfce-461b-a098-9444b2203023"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.116904 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b9e6ee1-bfce-461b-a098-9444b2203023" (UID: "9b9e6ee1-bfce-461b-a098-9444b2203023"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.142664 4730 scope.go:117] "RemoveContainer" containerID="8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.143070 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b9e6ee1-bfce-461b-a098-9444b2203023" (UID: "9b9e6ee1-bfce-461b-a098-9444b2203023"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.184466 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.190924 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.190946 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.190957 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9e6ee1-bfce-461b-a098-9444b2203023-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.211911 4730 scope.go:117] "RemoveContainer" containerID="ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f" Jan 31 16:48:55 crc kubenswrapper[4730]: E0131 16:48:55.213894 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f\": container with ID starting with ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f not found: ID does not exist" containerID="ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.213924 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f"} err="failed to get container status \"ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f\": rpc error: code = NotFound desc = could not find container \"ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f\": container with ID starting with ca4662245748d806e44bdc442bff7af124f6f0f5ec88ecf612f80fa1b09a814f not found: ID does not exist" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.213946 4730 scope.go:117] "RemoveContainer" containerID="8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5" Jan 31 16:48:55 crc kubenswrapper[4730]: E0131 16:48:55.217494 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5\": container with ID starting with 8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5 not found: ID does not exist" containerID="8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.217530 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5"} err="failed to get container status \"8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5\": rpc error: code = NotFound desc = could not find container \"8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5\": container with ID starting with 8856ad3e63483d5d17690f9e62ff5b4ab4e19d62586306096b1fe1d2b5cdccf5 not found: ID does not exist" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.365984 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-t7tcb"] Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.380677 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-t7tcb"] Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.489178 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.530792 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.607627 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qspx\" (UniqueName: \"kubernetes.io/projected/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-kube-api-access-5qspx\") pod \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.607745 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-combined-ca-bundle\") pod \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.607856 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-config-data\") pod \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.607916 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-scripts\") pod \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\" (UID: \"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.647301 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-kube-api-access-5qspx" (OuterVolumeSpecName: "kube-api-access-5qspx") pod "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" (UID: "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714"). InnerVolumeSpecName "kube-api-access-5qspx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.655585 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-config-data" (OuterVolumeSpecName: "config-data") pod "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" (UID: "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.662465 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" (UID: "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.667994 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-scripts" (OuterVolumeSpecName: "scripts") pod "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" (UID: "2b0bdf14-73a8-4d89-bdfe-b250d4b6a714"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.720674 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5j5j\" (UniqueName: \"kubernetes.io/projected/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-kube-api-access-n5j5j\") pod \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.720909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-config-data\") pod \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.720938 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-scripts\") pod \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.720987 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-combined-ca-bundle\") pod \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\" (UID: \"638775e1-f41e-4dd4-a0b3-0a77ceccd15b\") " Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.722397 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.722420 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qspx\" (UniqueName: \"kubernetes.io/projected/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-kube-api-access-5qspx\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.722429 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.722438 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.725903 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-kube-api-access-n5j5j" (OuterVolumeSpecName: "kube-api-access-n5j5j") pod "638775e1-f41e-4dd4-a0b3-0a77ceccd15b" (UID: "638775e1-f41e-4dd4-a0b3-0a77ceccd15b"). InnerVolumeSpecName "kube-api-access-n5j5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.736245 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-scripts" (OuterVolumeSpecName: "scripts") pod "638775e1-f41e-4dd4-a0b3-0a77ceccd15b" (UID: "638775e1-f41e-4dd4-a0b3-0a77ceccd15b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.782213 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-config-data" (OuterVolumeSpecName: "config-data") pod "638775e1-f41e-4dd4-a0b3-0a77ceccd15b" (UID: "638775e1-f41e-4dd4-a0b3-0a77ceccd15b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.783679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "638775e1-f41e-4dd4-a0b3-0a77ceccd15b" (UID: "638775e1-f41e-4dd4-a0b3-0a77ceccd15b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.824017 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.824050 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.824058 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:55 crc kubenswrapper[4730]: I0131 16:48:55.824071 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5j5j\" (UniqueName: \"kubernetes.io/projected/638775e1-f41e-4dd4-a0b3-0a77ceccd15b-kube-api-access-n5j5j\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.008888 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.008976 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b9fwh" event={"ID":"2b0bdf14-73a8-4d89-bdfe-b250d4b6a714","Type":"ContainerDied","Data":"e954fb1e58e04fd87a8d97192bfcefceec3c6883233869e284d35b469862b1a2"} Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.009016 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e954fb1e58e04fd87a8d97192bfcefceec3c6883233869e284d35b469862b1a2" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019007 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" exitCode=1 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019037 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" exitCode=1 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019077 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc"} Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019102 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756"} Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019119 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd"} Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019135 4730 scope.go:117] "RemoveContainer" containerID="e30d3ed5c5b6c43eb9c2220613b0d782fac205782deb3af666e757a62a21800c" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019857 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.019931 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:48:56 crc kubenswrapper[4730]: E0131 16:48:56.020321 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.024044 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c494d989-7c60-42f1-91ee-625a507f93d6","Type":"ContainerStarted","Data":"1c57a81119ea11c49b0894387973d9738243977ff4704dd379781c61d7c7898a"} Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 
16:48:56.024097 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.024108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c494d989-7c60-42f1-91ee-625a507f93d6","Type":"ContainerStarted","Data":"c93b171d7204c1d501e8435bef854bdc98e19331a897585e38a55998ef6520af"} Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.038473 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hbl4w" event={"ID":"638775e1-f41e-4dd4-a0b3-0a77ceccd15b","Type":"ContainerDied","Data":"e9d22c892eeea3925f669d6025ea6a2905965c25636e9a9d2cf6166783711d08"} Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.038507 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9d22c892eeea3925f669d6025ea6a2905965c25636e9a9d2cf6166783711d08" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.038572 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hbl4w" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.082132 4730 scope.go:117] "RemoveContainer" containerID="fc03d5326580ceb258731fd1dab9a7997f0a8647c6281dd26555dd688870ecc4" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.095370 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 16:48:56 crc kubenswrapper[4730]: E0131 16:48:56.095821 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerName="dnsmasq-dns" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.095834 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerName="dnsmasq-dns" Jan 31 16:48:56 crc kubenswrapper[4730]: E0131 16:48:56.095845 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerName="init" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.095851 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerName="init" Jan 31 16:48:56 crc kubenswrapper[4730]: E0131 16:48:56.095862 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" containerName="nova-cell1-conductor-db-sync" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.095870 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" containerName="nova-cell1-conductor-db-sync" Jan 31 16:48:56 crc kubenswrapper[4730]: E0131 16:48:56.095886 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638775e1-f41e-4dd4-a0b3-0a77ceccd15b" containerName="nova-manage" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.095892 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="638775e1-f41e-4dd4-a0b3-0a77ceccd15b" containerName="nova-manage" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.096056 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" containerName="nova-cell1-conductor-db-sync" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.096078 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" containerName="dnsmasq-dns" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.096096 4730 
memory_manager.go:354] "RemoveStaleState removing state" podUID="638775e1-f41e-4dd4-a0b3-0a77ceccd15b" containerName="nova-manage" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.096739 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.101003 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.110295 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.112069 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.668621376 podStartE2EDuration="2.112050123s" podCreationTimestamp="2026-01-31 16:48:54 +0000 UTC" firstStartedPulling="2026-01-31 16:48:55.204590394 +0000 UTC m=+1122.010647310" lastFinishedPulling="2026-01-31 16:48:55.648019141 +0000 UTC m=+1122.454076057" observedRunningTime="2026-01-31 16:48:56.109934835 +0000 UTC m=+1122.915991751" watchObservedRunningTime="2026-01-31 16:48:56.112050123 +0000 UTC m=+1122.918107039" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.246836 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn55t\" (UniqueName: \"kubernetes.io/projected/c261dee9-9004-49c9-be31-6571f30f8dbc-kube-api-access-cn55t\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.247192 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c261dee9-9004-49c9-be31-6571f30f8dbc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.247275 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c261dee9-9004-49c9-be31-6571f30f8dbc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.269490 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.270438 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.287140 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.287383 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-log" containerID="cri-o://f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f" gracePeriod=30 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.287444 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-api" 
containerID="cri-o://f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b" gracePeriod=30 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.296776 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.296989 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="de660e39-bb4a-4e40-bcd8-d87354323cc4" containerName="nova-scheduler-scheduler" containerID="cri-o://240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" gracePeriod=30 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.325605 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.349355 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c261dee9-9004-49c9-be31-6571f30f8dbc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.349609 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c261dee9-9004-49c9-be31-6571f30f8dbc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.349748 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn55t\" (UniqueName: \"kubernetes.io/projected/c261dee9-9004-49c9-be31-6571f30f8dbc-kube-api-access-cn55t\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.353448 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c261dee9-9004-49c9-be31-6571f30f8dbc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.354392 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c261dee9-9004-49c9-be31-6571f30f8dbc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.366507 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn55t\" (UniqueName: \"kubernetes.io/projected/c261dee9-9004-49c9-be31-6571f30f8dbc-kube-api-access-cn55t\") pod \"nova-cell1-conductor-0\" (UID: \"c261dee9-9004-49c9-be31-6571f30f8dbc\") " pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.412863 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.479009 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9e6ee1-bfce-461b-a098-9444b2203023" path="/var/lib/kubelet/pods/9b9e6ee1-bfce-461b-a098-9444b2203023/volumes" Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.479834 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.480094 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-central-agent" containerID="cri-o://af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7" gracePeriod=30 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.480201 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-notification-agent" containerID="cri-o://176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b" gracePeriod=30 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.480313 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="proxy-httpd" containerID="cri-o://335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409" gracePeriod=30 Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.480371 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="sg-core" containerID="cri-o://ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11" gracePeriod=30 Jan 31 16:48:56 crc kubenswrapper[4730]: W0131 16:48:56.925735 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc261dee9_9004_49c9_be31_6571f30f8dbc.slice/crio-9d449b60a4b936df5e804696b4008c0425df22cb617529a30b52b613103a2cef WatchSource:0}: Error finding container 9d449b60a4b936df5e804696b4008c0425df22cb617529a30b52b613103a2cef: Status 404 returned error can't find the container with id 9d449b60a4b936df5e804696b4008c0425df22cb617529a30b52b613103a2cef Jan 31 16:48:56 crc kubenswrapper[4730]: I0131 16:48:56.930914 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.050274 4730 generic.go:334] "Generic (PLEG): container finished" podID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerID="f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f" exitCode=143 Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.050328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"575160a7-8757-4da4-9eec-9cc6158c7d45","Type":"ContainerDied","Data":"f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f"} Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.066887 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" exitCode=1 Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.066923 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756"} Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.066974 4730 scope.go:117] "RemoveContainer" containerID="78180cc6fda456fe64c8357202f1958e48c11bee0b3864758c3c3d667278f9ff" Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.067660 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.067741 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.067918 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:48:57 crc kubenswrapper[4730]: E0131 16:48:57.068416 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.072742 4730 generic.go:334] "Generic (PLEG): container finished" podID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerID="335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409" exitCode=0 Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.072771 4730 generic.go:334] "Generic (PLEG): container finished" podID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerID="ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11" exitCode=2 Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.072779 4730 generic.go:334] "Generic (PLEG): container finished" podID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerID="af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7" exitCode=0 Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.072821 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerDied","Data":"335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409"} Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.072858 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerDied","Data":"ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11"} Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.072870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerDied","Data":"af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7"} Jan 31 16:48:57 crc kubenswrapper[4730]: I0131 16:48:57.074107 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"c261dee9-9004-49c9-be31-6571f30f8dbc","Type":"ContainerStarted","Data":"9d449b60a4b936df5e804696b4008c0425df22cb617529a30b52b613103a2cef"} Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.088705 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.089488 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.089629 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.089743 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c261dee9-9004-49c9-be31-6571f30f8dbc","Type":"ContainerStarted","Data":"4c81752dd22d1957e416fe050fc0aac83cb01cd466501547fce757a5e42c47e1"} Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.089853 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 16:48:58 crc kubenswrapper[4730]: E0131 16:48:58.090084 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.090432 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-metadata" containerID="cri-o://b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250" gracePeriod=30 Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.090562 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-log" containerID="cri-o://b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86" gracePeriod=30 Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.140682 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.1406673019999998 podStartE2EDuration="2.140667302s" podCreationTimestamp="2026-01-31 16:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:48:58.136191969 +0000 UTC m=+1124.942248885" watchObservedRunningTime="2026-01-31 16:48:58.140667302 +0000 UTC m=+1124.946724218" Jan 31 16:48:58 crc kubenswrapper[4730]: E0131 16:48:58.534992 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 16:48:58 crc kubenswrapper[4730]: E0131 16:48:58.536177 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 16:48:58 crc kubenswrapper[4730]: E0131 16:48:58.537113 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 16:48:58 crc kubenswrapper[4730]: E0131 16:48:58.537144 4730 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="de660e39-bb4a-4e40-bcd8-d87354323cc4" containerName="nova-scheduler-scheduler" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.673873 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.803224 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-config-data\") pod \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.803286 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-nova-metadata-tls-certs\") pod \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.803490 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx4xd\" (UniqueName: \"kubernetes.io/projected/3e58ad8e-757e-42a6-a6f3-fd573e185e50-kube-api-access-hx4xd\") pod \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.803590 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-combined-ca-bundle\") pod \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.803634 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e58ad8e-757e-42a6-a6f3-fd573e185e50-logs\") pod \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\" (UID: \"3e58ad8e-757e-42a6-a6f3-fd573e185e50\") " Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.804039 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e58ad8e-757e-42a6-a6f3-fd573e185e50-logs" (OuterVolumeSpecName: "logs") pod "3e58ad8e-757e-42a6-a6f3-fd573e185e50" (UID: 
"3e58ad8e-757e-42a6-a6f3-fd573e185e50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.804522 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e58ad8e-757e-42a6-a6f3-fd573e185e50-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.809539 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e58ad8e-757e-42a6-a6f3-fd573e185e50-kube-api-access-hx4xd" (OuterVolumeSpecName: "kube-api-access-hx4xd") pod "3e58ad8e-757e-42a6-a6f3-fd573e185e50" (UID: "3e58ad8e-757e-42a6-a6f3-fd573e185e50"). InnerVolumeSpecName "kube-api-access-hx4xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.833086 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-config-data" (OuterVolumeSpecName: "config-data") pod "3e58ad8e-757e-42a6-a6f3-fd573e185e50" (UID: "3e58ad8e-757e-42a6-a6f3-fd573e185e50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.860333 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e58ad8e-757e-42a6-a6f3-fd573e185e50" (UID: "3e58ad8e-757e-42a6-a6f3-fd573e185e50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.880401 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3e58ad8e-757e-42a6-a6f3-fd573e185e50" (UID: "3e58ad8e-757e-42a6-a6f3-fd573e185e50"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.907375 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx4xd\" (UniqueName: \"kubernetes.io/projected/3e58ad8e-757e-42a6-a6f3-fd573e185e50-kube-api-access-hx4xd\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.907401 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.907412 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:58 crc kubenswrapper[4730]: I0131 16:48:58.907421 4730 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e58ad8e-757e-42a6-a6f3-fd573e185e50-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.099836 4730 generic.go:334] "Generic (PLEG): container finished" podID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerID="b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250" exitCode=0 Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.099874 4730 generic.go:334] "Generic (PLEG): container finished" podID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerID="b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86" exitCode=143 Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.099919 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.099917 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e58ad8e-757e-42a6-a6f3-fd573e185e50","Type":"ContainerDied","Data":"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250"} Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.100091 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e58ad8e-757e-42a6-a6f3-fd573e185e50","Type":"ContainerDied","Data":"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86"} Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.100104 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e58ad8e-757e-42a6-a6f3-fd573e185e50","Type":"ContainerDied","Data":"48409653cc75e9a1d8b2e47ac103fe992d7bb529bc8460b8ed1dc69124943f97"} Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.100118 4730 scope.go:117] "RemoveContainer" containerID="b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.128971 4730 scope.go:117] "RemoveContainer" containerID="b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.134374 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.156058 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.161670 4730 scope.go:117] "RemoveContainer" containerID="b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250" Jan 31 16:48:59 crc kubenswrapper[4730]: E0131 16:48:59.162163 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250\": container with ID starting with b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250 not found: ID does not exist" containerID="b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.162194 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250"} err="failed to get container status \"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250\": rpc error: code = NotFound desc = could not find container \"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250\": container with ID starting with b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250 not found: ID does not exist" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.162213 4730 scope.go:117] "RemoveContainer" containerID="b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86" Jan 31 16:48:59 crc kubenswrapper[4730]: E0131 16:48:59.162361 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86\": container with ID starting with b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86 not found: ID does not exist" containerID="b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 
16:48:59.162379 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86"} err="failed to get container status \"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86\": rpc error: code = NotFound desc = could not find container \"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86\": container with ID starting with b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86 not found: ID does not exist" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.162390 4730 scope.go:117] "RemoveContainer" containerID="b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.162554 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250"} err="failed to get container status \"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250\": rpc error: code = NotFound desc = could not find container \"b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250\": container with ID starting with b76bc4755eb4d40420e14e49fd7c7072d49cfad5e0649db8595b95d8917c7250 not found: ID does not exist" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.162577 4730 scope.go:117] "RemoveContainer" containerID="b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.162720 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86"} err="failed to get container status \"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86\": rpc error: code = NotFound desc = could not find container \"b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86\": container with ID starting with b041bcb69ad91cc6782937a77c21ca703375b57c2dfa019ce59def0d390dbd86 not found: ID does not exist" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.174307 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:59 crc kubenswrapper[4730]: E0131 16:48:59.174674 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-log" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.174689 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-log" Jan 31 16:48:59 crc kubenswrapper[4730]: E0131 16:48:59.174719 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-metadata" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.174726 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-metadata" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.177135 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-log" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.177155 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" containerName="nova-metadata-metadata" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.178023 4730 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.179598 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.179766 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.187525 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.314936 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44259ad5-956e-4e78-8564-238063ce2747-logs\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.315043 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.315089 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.315131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/44259ad5-956e-4e78-8564-238063ce2747-kube-api-access-swwp4\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.315177 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-config-data\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.416740 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44259ad5-956e-4e78-8564-238063ce2747-logs\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.416864 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.416916 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") 
" pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.416959 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/44259ad5-956e-4e78-8564-238063ce2747-kube-api-access-swwp4\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.417003 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-config-data\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.417345 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44259ad5-956e-4e78-8564-238063ce2747-logs\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.424269 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.426191 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-config-data\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.430550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.433013 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/44259ad5-956e-4e78-8564-238063ce2747-kube-api-access-swwp4\") pod \"nova-metadata-0\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.529763 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:48:59 crc kubenswrapper[4730]: I0131 16:48:59.976661 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.108939 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44259ad5-956e-4e78-8564-238063ce2747","Type":"ContainerStarted","Data":"d7c73100a070b3a62bf07da70300013e1666706000dcc129e4eaefd5a7a11f40"} Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.476006 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e58ad8e-757e-42a6-a6f3-fd573e185e50" path="/var/lib/kubelet/pods/3e58ad8e-757e-42a6-a6f3-fd573e185e50/volumes" Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.679932 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.843620 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc7vr\" (UniqueName: \"kubernetes.io/projected/de660e39-bb4a-4e40-bcd8-d87354323cc4-kube-api-access-dc7vr\") pod \"de660e39-bb4a-4e40-bcd8-d87354323cc4\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.843707 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-combined-ca-bundle\") pod \"de660e39-bb4a-4e40-bcd8-d87354323cc4\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.843828 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-config-data\") pod \"de660e39-bb4a-4e40-bcd8-d87354323cc4\" (UID: \"de660e39-bb4a-4e40-bcd8-d87354323cc4\") " Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.872626 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de660e39-bb4a-4e40-bcd8-d87354323cc4-kube-api-access-dc7vr" (OuterVolumeSpecName: "kube-api-access-dc7vr") pod "de660e39-bb4a-4e40-bcd8-d87354323cc4" (UID: "de660e39-bb4a-4e40-bcd8-d87354323cc4"). InnerVolumeSpecName "kube-api-access-dc7vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.874760 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de660e39-bb4a-4e40-bcd8-d87354323cc4" (UID: "de660e39-bb4a-4e40-bcd8-d87354323cc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.889501 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-config-data" (OuterVolumeSpecName: "config-data") pod "de660e39-bb4a-4e40-bcd8-d87354323cc4" (UID: "de660e39-bb4a-4e40-bcd8-d87354323cc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.946999 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.947026 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc7vr\" (UniqueName: \"kubernetes.io/projected/de660e39-bb4a-4e40-bcd8-d87354323cc4-kube-api-access-dc7vr\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:00 crc kubenswrapper[4730]: I0131 16:49:00.947038 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de660e39-bb4a-4e40-bcd8-d87354323cc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.114136 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.119177 4730 generic.go:334] "Generic (PLEG): container finished" podID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerID="f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b" exitCode=0 Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.119258 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.119397 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"575160a7-8757-4da4-9eec-9cc6158c7d45","Type":"ContainerDied","Data":"f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b"} Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.119442 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"575160a7-8757-4da4-9eec-9cc6158c7d45","Type":"ContainerDied","Data":"56f94d89a8db9acdf4421388df801e65f2e88ca3b564625014d21b47e0d5e0b5"} Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.119459 4730 scope.go:117] "RemoveContainer" containerID="f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.124277 4730 generic.go:334] "Generic (PLEG): container finished" podID="de660e39-bb4a-4e40-bcd8-d87354323cc4" containerID="240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" exitCode=0 Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.124383 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"de660e39-bb4a-4e40-bcd8-d87354323cc4","Type":"ContainerDied","Data":"240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e"} Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.124410 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"de660e39-bb4a-4e40-bcd8-d87354323cc4","Type":"ContainerDied","Data":"8e0553fcf7ccf4017fafe7038eea4eb0e4de7d2c7c4af9fbd75449c566c5e2a3"} Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.124461 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.129495 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44259ad5-956e-4e78-8564-238063ce2747","Type":"ContainerStarted","Data":"4c7724b37a010451d6528b1a892ccda05d0a8a04c76ded9b741679f0c6a14caf"} Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.129531 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44259ad5-956e-4e78-8564-238063ce2747","Type":"ContainerStarted","Data":"d1ec40b4b1eefd9124c5cafaad268776776156d11f748798e62100418aec2bb7"} Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.145474 4730 scope.go:117] "RemoveContainer" containerID="f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.173954 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.176706 4730 scope.go:117] "RemoveContainer" containerID="f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b" Jan 31 16:49:01 crc kubenswrapper[4730]: E0131 16:49:01.178029 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b\": container with ID starting with f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b not found: ID does not exist" containerID="f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.178071 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b"} err="failed to get container status \"f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b\": rpc error: code = NotFound desc = could not find container \"f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b\": container with ID starting with f3f0e2838d17b0ac2f3cf99645602bacbc2a34dbce680b6b2404a5d86cee155b not found: ID does not exist" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.178097 4730 scope.go:117] "RemoveContainer" containerID="f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f" Jan 31 16:49:01 crc kubenswrapper[4730]: E0131 16:49:01.179635 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f\": container with ID starting with f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f not found: ID does not exist" containerID="f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.179667 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f"} err="failed to get container status \"f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f\": rpc error: code = NotFound desc = could not find container \"f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f\": container with ID starting with f24805d4da432fbdef8e92f6ea7b99fa76f42a43e84512cda9eb3c37de5d161f not found: ID does not exist" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.179689 4730 scope.go:117] 
"RemoveContainer" containerID="240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.182881 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.208623 4730 scope.go:117] "RemoveContainer" containerID="240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" Jan 31 16:49:01 crc kubenswrapper[4730]: E0131 16:49:01.209066 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e\": container with ID starting with 240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e not found: ID does not exist" containerID="240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.209100 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e"} err="failed to get container status \"240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e\": rpc error: code = NotFound desc = could not find container \"240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e\": container with ID starting with 240797cf5de075f5e97ebd19ae55537902c7110b3b4d240da8a68db5460a2c9e not found: ID does not exist" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.213265 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.213248449 podStartE2EDuration="2.213248449s" podCreationTimestamp="2026-01-31 16:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:01.208854929 +0000 UTC m=+1128.014911865" watchObservedRunningTime="2026-01-31 16:49:01.213248449 +0000 UTC m=+1128.019305375" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.214856 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: E0131 16:49:01.215272 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-log" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.215283 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-log" Jan 31 16:49:01 crc kubenswrapper[4730]: E0131 16:49:01.215301 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de660e39-bb4a-4e40-bcd8-d87354323cc4" containerName="nova-scheduler-scheduler" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.215307 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="de660e39-bb4a-4e40-bcd8-d87354323cc4" containerName="nova-scheduler-scheduler" Jan 31 16:49:01 crc kubenswrapper[4730]: E0131 16:49:01.215338 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-api" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.215345 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-api" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.215555 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="de660e39-bb4a-4e40-bcd8-d87354323cc4" containerName="nova-scheduler-scheduler" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.215572 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-api" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.215582 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" containerName="nova-api-log" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.216565 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.218770 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.230953 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.253489 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zqfb\" (UniqueName: \"kubernetes.io/projected/575160a7-8757-4da4-9eec-9cc6158c7d45-kube-api-access-5zqfb\") pod \"575160a7-8757-4da4-9eec-9cc6158c7d45\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.253618 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575160a7-8757-4da4-9eec-9cc6158c7d45-logs\") pod \"575160a7-8757-4da4-9eec-9cc6158c7d45\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.253646 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-combined-ca-bundle\") pod \"575160a7-8757-4da4-9eec-9cc6158c7d45\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.253677 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-config-data\") pod \"575160a7-8757-4da4-9eec-9cc6158c7d45\" (UID: \"575160a7-8757-4da4-9eec-9cc6158c7d45\") " Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.254254 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575160a7-8757-4da4-9eec-9cc6158c7d45-logs" (OuterVolumeSpecName: "logs") pod "575160a7-8757-4da4-9eec-9cc6158c7d45" (UID: "575160a7-8757-4da4-9eec-9cc6158c7d45"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.257323 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575160a7-8757-4da4-9eec-9cc6158c7d45-kube-api-access-5zqfb" (OuterVolumeSpecName: "kube-api-access-5zqfb") pod "575160a7-8757-4da4-9eec-9cc6158c7d45" (UID: "575160a7-8757-4da4-9eec-9cc6158c7d45"). InnerVolumeSpecName "kube-api-access-5zqfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.277746 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "575160a7-8757-4da4-9eec-9cc6158c7d45" (UID: "575160a7-8757-4da4-9eec-9cc6158c7d45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.280338 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-config-data" (OuterVolumeSpecName: "config-data") pod "575160a7-8757-4da4-9eec-9cc6158c7d45" (UID: "575160a7-8757-4da4-9eec-9cc6158c7d45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.355711 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.356496 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jj7\" (UniqueName: \"kubernetes.io/projected/8d46a81f-3e6a-4035-869e-db235995f42e-kube-api-access-z6jj7\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.356719 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-config-data\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.356881 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.356978 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575160a7-8757-4da4-9eec-9cc6158c7d45-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.357055 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zqfb\" (UniqueName: \"kubernetes.io/projected/575160a7-8757-4da4-9eec-9cc6158c7d45-kube-api-access-5zqfb\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.357195 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575160a7-8757-4da4-9eec-9cc6158c7d45-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.458846 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc 
kubenswrapper[4730]: I0131 16:49:01.458896 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6jj7\" (UniqueName: \"kubernetes.io/projected/8d46a81f-3e6a-4035-869e-db235995f42e-kube-api-access-z6jj7\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.458981 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-config-data\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.464052 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.467585 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-config-data\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.481233 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6jj7\" (UniqueName: \"kubernetes.io/projected/8d46a81f-3e6a-4035-869e-db235995f42e-kube-api-access-z6jj7\") pod \"nova-scheduler-0\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.501151 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.514913 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.535938 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.537488 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.538371 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.540540 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.541872 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.662733 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-config-data\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.663096 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.663127 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-logs\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.663160 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsddc\" (UniqueName: \"kubernetes.io/projected/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-kube-api-access-gsddc\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.764173 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-config-data\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.764216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.764241 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-logs\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.764276 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsddc\" (UniqueName: \"kubernetes.io/projected/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-kube-api-access-gsddc\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.764841 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-logs\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " 
pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.769372 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-config-data\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.782382 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.782387 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsddc\" (UniqueName: \"kubernetes.io/projected/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-kube-api-access-gsddc\") pod \"nova-api-0\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.879474 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:01 crc kubenswrapper[4730]: I0131 16:49:01.983625 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:01 crc kubenswrapper[4730]: W0131 16:49:01.993697 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d46a81f_3e6a_4035_869e_db235995f42e.slice/crio-a4e532cbd178d78804aacc6b700359664185487313dd34d8ded2f15e25edd2b1 WatchSource:0}: Error finding container a4e532cbd178d78804aacc6b700359664185487313dd34d8ded2f15e25edd2b1: Status 404 returned error can't find the container with id a4e532cbd178d78804aacc6b700359664185487313dd34d8ded2f15e25edd2b1 Jan 31 16:49:02 crc kubenswrapper[4730]: I0131 16:49:02.147715 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d46a81f-3e6a-4035-869e-db235995f42e","Type":"ContainerStarted","Data":"a4e532cbd178d78804aacc6b700359664185487313dd34d8ded2f15e25edd2b1"} Jan 31 16:49:02 crc kubenswrapper[4730]: I0131 16:49:02.341197 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:02 crc kubenswrapper[4730]: W0131 16:49:02.345858 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaab83ba1_24e1_48aa_bf77_8444ed3cc8b5.slice/crio-0b9841f74a584795ad0f25a7e87792a7f100ae4f9f6c3fc6657d11b2f1f2add8 WatchSource:0}: Error finding container 0b9841f74a584795ad0f25a7e87792a7f100ae4f9f6c3fc6657d11b2f1f2add8: Status 404 returned error can't find the container with id 0b9841f74a584795ad0f25a7e87792a7f100ae4f9f6c3fc6657d11b2f1f2add8 Jan 31 16:49:02 crc kubenswrapper[4730]: I0131 16:49:02.464905 4730 scope.go:117] "RemoveContainer" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" Jan 31 16:49:02 crc kubenswrapper[4730]: I0131 16:49:02.464931 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:49:02 crc kubenswrapper[4730]: I0131 16:49:02.475888 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575160a7-8757-4da4-9eec-9cc6158c7d45" path="/var/lib/kubelet/pods/575160a7-8757-4da4-9eec-9cc6158c7d45/volumes" Jan 31 16:49:02 crc 
kubenswrapper[4730]: I0131 16:49:02.476448 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de660e39-bb4a-4e40-bcd8-d87354323cc4" path="/var/lib/kubelet/pods/de660e39-bb4a-4e40-bcd8-d87354323cc4/volumes" Jan 31 16:49:02 crc kubenswrapper[4730]: E0131 16:49:02.605316 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.161430 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5","Type":"ContainerStarted","Data":"565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470"} Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.161470 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5","Type":"ContainerStarted","Data":"e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30"} Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.161480 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5","Type":"ContainerStarted","Data":"0b9841f74a584795ad0f25a7e87792a7f100ae4f9f6c3fc6657d11b2f1f2add8"} Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.165667 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"cfa1bb05f3641c975fee4a59cca6327c8fe5928e7e78248e3be9568518a9568f"} Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.166115 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:49:03 crc kubenswrapper[4730]: E0131 16:49:03.166440 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.166238 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.168357 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d46a81f-3e6a-4035-869e-db235995f42e","Type":"ContainerStarted","Data":"af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7"} Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.197181 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.19716142 podStartE2EDuration="2.19716142s" podCreationTimestamp="2026-01-31 16:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:03.183631648 +0000 UTC m=+1129.989688554" watchObservedRunningTime="2026-01-31 16:49:03.19716142 +0000 UTC m=+1130.003218336" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 
16:49:03.204032 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.204011618 podStartE2EDuration="2.204011618s" podCreationTimestamp="2026-01-31 16:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:03.203240287 +0000 UTC m=+1130.009297223" watchObservedRunningTime="2026-01-31 16:49:03.204011618 +0000 UTC m=+1130.010068544" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.563330 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.610968 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-run-httpd\") pod \"4abc3572-660b-4c33-ac87-9cb6593a92a4\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611068 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-combined-ca-bundle\") pod \"4abc3572-660b-4c33-ac87-9cb6593a92a4\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611098 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75ngk\" (UniqueName: \"kubernetes.io/projected/4abc3572-660b-4c33-ac87-9cb6593a92a4-kube-api-access-75ngk\") pod \"4abc3572-660b-4c33-ac87-9cb6593a92a4\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611158 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-config-data\") pod \"4abc3572-660b-4c33-ac87-9cb6593a92a4\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611179 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-scripts\") pod \"4abc3572-660b-4c33-ac87-9cb6593a92a4\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611250 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-sg-core-conf-yaml\") pod \"4abc3572-660b-4c33-ac87-9cb6593a92a4\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611267 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-log-httpd\") pod \"4abc3572-660b-4c33-ac87-9cb6593a92a4\" (UID: \"4abc3572-660b-4c33-ac87-9cb6593a92a4\") " Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611716 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4abc3572-660b-4c33-ac87-9cb6593a92a4" (UID: "4abc3572-660b-4c33-ac87-9cb6593a92a4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.611879 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4abc3572-660b-4c33-ac87-9cb6593a92a4" (UID: "4abc3572-660b-4c33-ac87-9cb6593a92a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.626001 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-scripts" (OuterVolumeSpecName: "scripts") pod "4abc3572-660b-4c33-ac87-9cb6593a92a4" (UID: "4abc3572-660b-4c33-ac87-9cb6593a92a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.627072 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4abc3572-660b-4c33-ac87-9cb6593a92a4-kube-api-access-75ngk" (OuterVolumeSpecName: "kube-api-access-75ngk") pod "4abc3572-660b-4c33-ac87-9cb6593a92a4" (UID: "4abc3572-660b-4c33-ac87-9cb6593a92a4"). InnerVolumeSpecName "kube-api-access-75ngk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.687915 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4abc3572-660b-4c33-ac87-9cb6593a92a4" (UID: "4abc3572-660b-4c33-ac87-9cb6593a92a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.708975 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4abc3572-660b-4c33-ac87-9cb6593a92a4" (UID: "4abc3572-660b-4c33-ac87-9cb6593a92a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.719062 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.719092 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.719100 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4abc3572-660b-4c33-ac87-9cb6593a92a4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.719111 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.719121 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75ngk\" (UniqueName: \"kubernetes.io/projected/4abc3572-660b-4c33-ac87-9cb6593a92a4-kube-api-access-75ngk\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.719149 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.752903 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-config-data" (OuterVolumeSpecName: "config-data") pod "4abc3572-660b-4c33-ac87-9cb6593a92a4" (UID: "4abc3572-660b-4c33-ac87-9cb6593a92a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:03 crc kubenswrapper[4730]: I0131 16:49:03.821360 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abc3572-660b-4c33-ac87-9cb6593a92a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.183461 4730 generic.go:334] "Generic (PLEG): container finished" podID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerID="176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b" exitCode=0 Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.183531 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.183516 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerDied","Data":"176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b"} Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.184918 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4abc3572-660b-4c33-ac87-9cb6593a92a4","Type":"ContainerDied","Data":"77ae6c6307d94660a2447e76cc71bec3047518f9c3702e02f86140165bb701d8"} Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.184978 4730 scope.go:117] "RemoveContainer" containerID="335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.185980 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.186209 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.213450 4730 scope.go:117] "RemoveContainer" containerID="ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.245725 4730 scope.go:117] "RemoveContainer" containerID="176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.253629 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.270974 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.275597 4730 scope.go:117] "RemoveContainer" containerID="af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.284624 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.285051 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-central-agent" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285071 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-central-agent" Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.285094 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-notification-agent" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285106 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-notification-agent" Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.285132 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="proxy-httpd" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285140 4730 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="proxy-httpd" Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.285170 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="sg-core" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285179 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="sg-core" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285404 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-central-agent" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285431 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="proxy-httpd" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285447 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="ceilometer-notification-agent" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.285464 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" containerName="sg-core" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.289711 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.293748 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.294082 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.294227 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.300626 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.304069 4730 scope.go:117] "RemoveContainer" containerID="335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409" Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.304520 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409\": container with ID starting with 335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409 not found: ID does not exist" containerID="335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.304550 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409"} err="failed to get container status \"335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409\": rpc error: code = NotFound desc = could not find container \"335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409\": container with ID starting with 335018c9652145b9a88d9342c4aec5b12feef1a64455c9d82e3a0cda51df3409 not found: ID does not exist" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.304570 4730 scope.go:117] "RemoveContainer" containerID="ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11" Jan 31 16:49:04 crc 
kubenswrapper[4730]: E0131 16:49:04.304993 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11\": container with ID starting with ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11 not found: ID does not exist" containerID="ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.305039 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11"} err="failed to get container status \"ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11\": rpc error: code = NotFound desc = could not find container \"ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11\": container with ID starting with ed878ce61ac9c7ded2ee20ea2134950c58807baf5fb8db9b7e2a4ffac2478d11 not found: ID does not exist" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.305072 4730 scope.go:117] "RemoveContainer" containerID="176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b" Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.305347 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b\": container with ID starting with 176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b not found: ID does not exist" containerID="176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.305375 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b"} err="failed to get container status \"176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b\": rpc error: code = NotFound desc = could not find container \"176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b\": container with ID starting with 176a361dfa4a240834ee6db556899e14c49f4fd8c287263515bbe327d4487e0b not found: ID does not exist" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.305392 4730 scope.go:117] "RemoveContainer" containerID="af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7" Jan 31 16:49:04 crc kubenswrapper[4730]: E0131 16:49:04.305833 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7\": container with ID starting with af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7 not found: ID does not exist" containerID="af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.305857 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7"} err="failed to get container status \"af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7\": rpc error: code = NotFound desc = could not find container \"af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7\": container with ID starting with af6c2addfd8a45667a9f7dec408961ad59708b07b760f89ab3fd8a66674094d7 not found: ID does not exist" Jan 31 16:49:04 crc kubenswrapper[4730]: 
I0131 16:49:04.433588 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.433630 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.433658 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpfhb\" (UniqueName: \"kubernetes.io/projected/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-kube-api-access-wpfhb\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.433697 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-log-httpd\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.433762 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-scripts\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.433784 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.433889 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-run-httpd\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.433931 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-config-data\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.496019 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4abc3572-660b-4c33-ac87-9cb6593a92a4" path="/var/lib/kubelet/pods/4abc3572-660b-4c33-ac87-9cb6593a92a4/volumes" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.530585 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.530645 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 16:49:04 crc 
kubenswrapper[4730]: I0131 16:49:04.534749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-config-data\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.534792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.534824 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.534851 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpfhb\" (UniqueName: \"kubernetes.io/projected/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-kube-api-access-wpfhb\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.534884 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-log-httpd\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.534921 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-scripts\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.534943 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.535055 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-run-httpd\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.535427 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-run-httpd\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.536483 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-log-httpd\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 
16:49:04.539784 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.540344 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-config-data\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.541011 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-scripts\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.546360 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.547466 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.554507 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpfhb\" (UniqueName: \"kubernetes.io/projected/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-kube-api-access-wpfhb\") pod \"ceilometer-0\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " pod="openstack/ceilometer-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.567054 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 16:49:04 crc kubenswrapper[4730]: I0131 16:49:04.616096 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:05 crc kubenswrapper[4730]: I0131 16:49:05.088407 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:05 crc kubenswrapper[4730]: I0131 16:49:05.212683 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerStarted","Data":"399009e2c67ba035c6189e2592a508016774ee1eb66eb843dbff132331396074"} Jan 31 16:49:06 crc kubenswrapper[4730]: I0131 16:49:06.241950 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerStarted","Data":"d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244"} Jan 31 16:49:06 crc kubenswrapper[4730]: I0131 16:49:06.439385 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 31 16:49:06 crc kubenswrapper[4730]: I0131 16:49:06.539846 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 16:49:07 crc kubenswrapper[4730]: I0131 16:49:07.252154 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerStarted","Data":"4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d"} Jan 31 16:49:07 crc kubenswrapper[4730]: I0131 16:49:07.252435 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerStarted","Data":"bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f"} Jan 31 16:49:09 crc kubenswrapper[4730]: I0131 16:49:09.530469 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 16:49:09 crc kubenswrapper[4730]: I0131 16:49:09.532198 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 16:49:09 crc kubenswrapper[4730]: I0131 16:49:09.660830 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.288941 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerStarted","Data":"cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13"} Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.324232 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.023184886 podStartE2EDuration="6.324214338s" podCreationTimestamp="2026-01-31 16:49:04 +0000 UTC" firstStartedPulling="2026-01-31 16:49:05.092086008 +0000 UTC m=+1131.898142944" lastFinishedPulling="2026-01-31 16:49:09.39311547 +0000 UTC m=+1136.199172396" observedRunningTime="2026-01-31 16:49:10.323635892 +0000 UTC m=+1137.129692818" watchObservedRunningTime="2026-01-31 16:49:10.324214338 +0000 UTC m=+1137.130271274" Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.464894 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.465264 4730 
scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.465381 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:49:10 crc kubenswrapper[4730]: E0131 16:49:10.465859 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.543994 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.544012 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:10 crc kubenswrapper[4730]: I0131 16:49:10.659257 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:11 crc kubenswrapper[4730]: I0131 16:49:11.309051 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:49:11 crc kubenswrapper[4730]: I0131 16:49:11.539750 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 16:49:11 crc kubenswrapper[4730]: I0131 16:49:11.566399 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 16:49:11 crc kubenswrapper[4730]: E0131 16:49:11.643730 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 16:49:11 crc kubenswrapper[4730]: I0131 16:49:11.880220 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 16:49:11 crc kubenswrapper[4730]: I0131 16:49:11.880264 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 16:49:12 crc kubenswrapper[4730]: I0131 16:49:12.317733 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:49:12 crc kubenswrapper[4730]: I0131 16:49:12.402044 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 16:49:12 crc kubenswrapper[4730]: I0131 16:49:12.661275 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:12 crc kubenswrapper[4730]: I0131 16:49:12.962976 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:12 crc kubenswrapper[4730]: I0131 16:49:12.963042 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:14 crc kubenswrapper[4730]: I0131 16:49:14.971564 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:49:14 crc kubenswrapper[4730]: E0131 16:49:14.971886 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:49:14 crc kubenswrapper[4730]: E0131 16:49:14.972038 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:51:16.972007153 +0000 UTC m=+1263.778064109 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:49:15 crc kubenswrapper[4730]: I0131 16:49:15.465460 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:49:15 crc kubenswrapper[4730]: I0131 16:49:15.469599 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:15 crc kubenswrapper[4730]: I0131 16:49:15.656617 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:15 crc kubenswrapper[4730]: I0131 16:49:15.656674 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:49:15 crc kubenswrapper[4730]: I0131 16:49:15.656984 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:16 crc kubenswrapper[4730]: I0131 16:49:16.357663 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b"} Jan 31 16:49:16 crc kubenswrapper[4730]: I0131 16:49:16.358188 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:49:16 crc kubenswrapper[4730]: I0131 16:49:16.358678 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"cfa1bb05f3641c975fee4a59cca6327c8fe5928e7e78248e3be9568518a9568f"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:49:16 crc kubenswrapper[4730]: I0131 16:49:16.358780 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://cfa1bb05f3641c975fee4a59cca6327c8fe5928e7e78248e3be9568518a9568f" gracePeriod=30 Jan 31 16:49:16 crc kubenswrapper[4730]: I0131 16:49:16.365372 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.371918 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" exitCode=1 Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.372140 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" 
containerID="cfa1bb05f3641c975fee4a59cca6327c8fe5928e7e78248e3be9568518a9568f" exitCode=0 Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.371980 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b"} Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.372189 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"cfa1bb05f3641c975fee4a59cca6327c8fe5928e7e78248e3be9568518a9568f"} Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.372204 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f"} Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.372220 4730 scope.go:117] "RemoveContainer" containerID="84d40ecfafd585df45a30308ba3f8ff4f5ec4e8a5fb29b9578ee7d0795ac3414" Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.372352 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.372987 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:49:17 crc kubenswrapper[4730]: E0131 16:49:17.373379 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:17 crc kubenswrapper[4730]: I0131 16:49:17.434565 4730 scope.go:117] "RemoveContainer" containerID="7c82501473fe44da233cb5f731a3ef4645d054a7dd345473f9e244a5bc551d74" Jan 31 16:49:18 crc kubenswrapper[4730]: I0131 16:49:18.389906 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:49:18 crc kubenswrapper[4730]: E0131 16:49:18.391218 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:18 crc kubenswrapper[4730]: I0131 16:49:18.653516 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.319558 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.360967 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x846z\" (UniqueName: \"kubernetes.io/projected/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-kube-api-access-x846z\") pod \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.361055 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-config-data\") pod \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.361077 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-combined-ca-bundle\") pod \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\" (UID: \"dd25f1e4-9703-430d-96e1-9dc82dbcde4b\") " Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.370269 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-kube-api-access-x846z" (OuterVolumeSpecName: "kube-api-access-x846z") pod "dd25f1e4-9703-430d-96e1-9dc82dbcde4b" (UID: "dd25f1e4-9703-430d-96e1-9dc82dbcde4b"). InnerVolumeSpecName "kube-api-access-x846z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.393299 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-config-data" (OuterVolumeSpecName: "config-data") pod "dd25f1e4-9703-430d-96e1-9dc82dbcde4b" (UID: "dd25f1e4-9703-430d-96e1-9dc82dbcde4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.398720 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd25f1e4-9703-430d-96e1-9dc82dbcde4b" (UID: "dd25f1e4-9703-430d-96e1-9dc82dbcde4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.418625 4730 generic.go:334] "Generic (PLEG): container finished" podID="dd25f1e4-9703-430d-96e1-9dc82dbcde4b" containerID="9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0" exitCode=137 Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.418834 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.419337 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.419436 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd25f1e4-9703-430d-96e1-9dc82dbcde4b","Type":"ContainerDied","Data":"9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0"} Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.419521 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dd25f1e4-9703-430d-96e1-9dc82dbcde4b","Type":"ContainerDied","Data":"1b77a561646cc427ef480dc3dc10e712cf10d1c89e68c543d6282d02d0c31893"} Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.419588 4730 scope.go:117] "RemoveContainer" containerID="9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0" Jan 31 16:49:19 crc kubenswrapper[4730]: E0131 16:49:19.419660 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.462473 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x846z\" (UniqueName: \"kubernetes.io/projected/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-kube-api-access-x846z\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.462835 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.462846 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd25f1e4-9703-430d-96e1-9dc82dbcde4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.481478 4730 scope.go:117] "RemoveContainer" containerID="9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.481628 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:49:19 crc kubenswrapper[4730]: E0131 16:49:19.484513 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0\": container with ID starting with 9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0 not found: ID does not exist" containerID="9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.484567 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0"} err="failed to get container status \"9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0\": rpc error: code = NotFound desc = could not find container \"9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0\": 
container with ID starting with 9df3494dbc8ab0d2849a426c10a448e7321e328b43b324a3041627db3a43b0c0 not found: ID does not exist" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.494465 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.504745 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:49:19 crc kubenswrapper[4730]: E0131 16:49:19.505251 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd25f1e4-9703-430d-96e1-9dc82dbcde4b" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.505274 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd25f1e4-9703-430d-96e1-9dc82dbcde4b" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.505547 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd25f1e4-9703-430d-96e1-9dc82dbcde4b" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.506291 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.508251 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.508446 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.508750 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.518039 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.546587 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.556464 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.567363 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.569084 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.569137 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9822x\" (UniqueName: \"kubernetes.io/projected/1debbac8-6d45-417c-a365-5fbe9f123d58-kube-api-access-9822x\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.569452 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.569579 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.569680 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.671968 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.672037 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9822x\" (UniqueName: \"kubernetes.io/projected/1debbac8-6d45-417c-a365-5fbe9f123d58-kube-api-access-9822x\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.672124 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.672147 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.672175 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.679579 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.680712 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.680919 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.685607 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1debbac8-6d45-417c-a365-5fbe9f123d58-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.704161 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9822x\" (UniqueName: \"kubernetes.io/projected/1debbac8-6d45-417c-a365-5fbe9f123d58-kube-api-access-9822x\") pod \"nova-cell1-novncproxy-0\" (UID: \"1debbac8-6d45-417c-a365-5fbe9f123d58\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:19 crc kubenswrapper[4730]: I0131 16:49:19.822125 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:20 crc kubenswrapper[4730]: I0131 16:49:20.284488 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 16:49:20 crc kubenswrapper[4730]: I0131 16:49:20.432481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1debbac8-6d45-417c-a365-5fbe9f123d58","Type":"ContainerStarted","Data":"43a59ad4cd828a0b20de1bcfaad48034eae0bf6f858a48e59f8a35bf60b9f4a3"} Jan 31 16:49:20 crc kubenswrapper[4730]: I0131 16:49:20.438832 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 16:49:20 crc kubenswrapper[4730]: I0131 16:49:20.486223 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd25f1e4-9703-430d-96e1-9dc82dbcde4b" path="/var/lib/kubelet/pods/dd25f1e4-9703-430d-96e1-9dc82dbcde4b/volumes" Jan 31 16:49:21 crc kubenswrapper[4730]: I0131 16:49:21.455848 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1debbac8-6d45-417c-a365-5fbe9f123d58","Type":"ContainerStarted","Data":"0ab3b67052c81463496a841caed4ca409272a725b55fbd50473437df20cdba9e"} Jan 31 16:49:21 crc kubenswrapper[4730]: I0131 16:49:21.485138 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.485119282 podStartE2EDuration="2.485119282s" podCreationTimestamp="2026-01-31 16:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:21.472027212 +0000 UTC m=+1148.278084138" watchObservedRunningTime="2026-01-31 16:49:21.485119282 +0000 UTC m=+1148.291176208" Jan 31 16:49:21 crc kubenswrapper[4730]: I0131 16:49:21.661526 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed 
with statuscode: 503" Jan 31 16:49:21 crc kubenswrapper[4730]: I0131 16:49:21.883090 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 16:49:21 crc kubenswrapper[4730]: I0131 16:49:21.883528 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 16:49:21 crc kubenswrapper[4730]: I0131 16:49:21.884299 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 16:49:21 crc kubenswrapper[4730]: I0131 16:49:21.886726 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.466558 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.467022 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.467248 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:49:22 crc kubenswrapper[4730]: E0131 16:49:22.467764 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.498155 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.498268 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.756861 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95bd95597-lwsxh"] Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.758920 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.785132 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95bd95597-lwsxh"] Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.930378 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-dns-svc\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.930432 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.930486 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htt7n\" (UniqueName: \"kubernetes.io/projected/6357893e-9e12-47db-a262-966a020b4aa2-kube-api-access-htt7n\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.930628 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-config\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:22 crc kubenswrapper[4730]: I0131 16:49:22.930759 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.032205 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.032273 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htt7n\" (UniqueName: \"kubernetes.io/projected/6357893e-9e12-47db-a262-966a020b4aa2-kube-api-access-htt7n\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.032331 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-config\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.032353 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.032433 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-dns-svc\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.032977 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.033078 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-dns-svc\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.033314 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-config\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.033692 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6357893e-9e12-47db-a262-966a020b4aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.049644 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htt7n\" (UniqueName: \"kubernetes.io/projected/6357893e-9e12-47db-a262-966a020b4aa2-kube-api-access-htt7n\") pod \"dnsmasq-dns-95bd95597-lwsxh\" (UID: \"6357893e-9e12-47db-a262-966a020b4aa2\") " pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.079242 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:23 crc kubenswrapper[4730]: I0131 16:49:23.585331 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95bd95597-lwsxh"] Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.487882 4730 generic.go:334] "Generic (PLEG): container finished" podID="6357893e-9e12-47db-a262-966a020b4aa2" containerID="9dcd7391f51d927bd6d2159a72ad8efbfa8bb432744952f7db05116e54647c6f" exitCode=0 Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.487977 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" event={"ID":"6357893e-9e12-47db-a262-966a020b4aa2","Type":"ContainerDied","Data":"9dcd7391f51d927bd6d2159a72ad8efbfa8bb432744952f7db05116e54647c6f"} Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.488154 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" event={"ID":"6357893e-9e12-47db-a262-966a020b4aa2","Type":"ContainerStarted","Data":"48e9c8b78f5251fcb225778fbd25cb714008b3db33add92efe255e8d1d02e41a"} Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.657314 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.798438 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.798702 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-central-agent" containerID="cri-o://d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244" gracePeriod=30 Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.798832 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="sg-core" containerID="cri-o://4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d" gracePeriod=30 Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.798899 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-notification-agent" containerID="cri-o://bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f" gracePeriod=30 Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.799144 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="proxy-httpd" containerID="cri-o://cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13" gracePeriod=30 Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.813050 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.204:3000/\": read tcp 10.217.0.2:37840->10.217.0.204:3000: read: connection reset by peer" Jan 31 16:49:24 crc kubenswrapper[4730]: I0131 16:49:24.823295 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 
16:49:25.161991 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.499437 4730 generic.go:334] "Generic (PLEG): container finished" podID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerID="cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13" exitCode=0 Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.499738 4730 generic.go:334] "Generic (PLEG): container finished" podID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerID="4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d" exitCode=2 Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.499837 4730 generic.go:334] "Generic (PLEG): container finished" podID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerID="d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244" exitCode=0 Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.499498 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerDied","Data":"cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13"} Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.500089 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerDied","Data":"4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d"} Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.502445 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerDied","Data":"d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244"} Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.502730 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" event={"ID":"6357893e-9e12-47db-a262-966a020b4aa2","Type":"ContainerStarted","Data":"45bcb38b8ab352a27c8c29eb6c8af3966f3afcf2fb11b669097919f0f697d7d6"} Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.503650 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-log" containerID="cri-o://e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30" gracePeriod=30 Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.503782 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-api" containerID="cri-o://565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470" gracePeriod=30 Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.546640 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" podStartSLOduration=3.546621596 podStartE2EDuration="3.546621596s" podCreationTimestamp="2026-01-31 16:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:25.527124361 +0000 UTC m=+1152.333181297" watchObservedRunningTime="2026-01-31 16:49:25.546621596 +0000 UTC m=+1152.352678512" Jan 31 16:49:25 crc kubenswrapper[4730]: I0131 16:49:25.660345 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:26 crc kubenswrapper[4730]: I0131 16:49:26.511718 4730 generic.go:334] "Generic (PLEG): container finished" podID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerID="e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30" exitCode=143 Jan 31 16:49:26 crc kubenswrapper[4730]: I0131 16:49:26.511770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5","Type":"ContainerDied","Data":"e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30"} Jan 31 16:49:26 crc kubenswrapper[4730]: I0131 16:49:26.512065 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.441708 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.520569 4730 generic.go:334] "Generic (PLEG): container finished" podID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerID="bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f" exitCode=0 Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.520660 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerDied","Data":"bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f"} Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.520690 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d18f2663-e551-4458-a5bf-3fa7c8caeaf3","Type":"ContainerDied","Data":"399009e2c67ba035c6189e2592a508016774ee1eb66eb843dbff132331396074"} Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.520707 4730 scope.go:117] "RemoveContainer" containerID="cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.520855 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.538229 4730 scope.go:117] "RemoveContainer" containerID="4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.552742 4730 scope.go:117] "RemoveContainer" containerID="bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.567962 4730 scope.go:117] "RemoveContainer" containerID="d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.584723 4730 scope.go:117] "RemoveContainer" containerID="cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.585189 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13\": container with ID starting with cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13 not found: ID does not exist" containerID="cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.585237 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13"} err="failed to get container status \"cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13\": rpc error: code = NotFound desc = could not find container \"cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13\": container with ID starting with cc4c05e6b4d8610771ec684acdf6c3693072c1a9855f2a04e00ad38408eb5f13 not found: ID does not exist" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.585263 4730 scope.go:117] "RemoveContainer" containerID="4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.585499 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d\": container with ID starting with 4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d not found: ID does not exist" containerID="4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.585536 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d"} err="failed to get container status \"4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d\": rpc error: code = NotFound desc = could not find container \"4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d\": container with ID starting with 4c2ebf57942e4aad1a5af86ffbc37ea53f8b34923aa7fe2664fbd6a3f60a706d not found: ID does not exist" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.585549 4730 scope.go:117] "RemoveContainer" containerID="bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.585860 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f\": container with ID starting with 
bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f not found: ID does not exist" containerID="bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.585906 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f"} err="failed to get container status \"bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f\": rpc error: code = NotFound desc = could not find container \"bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f\": container with ID starting with bdf743837e255d1601d61593bc07a40dc6cc231db3fc239dad2155f7390c715f not found: ID does not exist" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.585937 4730 scope.go:117] "RemoveContainer" containerID="d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.586291 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244\": container with ID starting with d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244 not found: ID does not exist" containerID="d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.586324 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244"} err="failed to get container status \"d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244\": rpc error: code = NotFound desc = could not find container \"d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244\": container with ID starting with d37a9aeff855967f773a34e1e9f64bacc38510e922981068c24d9af016c7d244 not found: ID does not exist" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.621589 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-combined-ca-bundle\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.621723 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-run-httpd\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.621764 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-log-httpd\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.621789 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-scripts\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.621860 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-sg-core-conf-yaml\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.621952 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpfhb\" (UniqueName: \"kubernetes.io/projected/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-kube-api-access-wpfhb\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.621986 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-config-data\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.622014 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-ceilometer-tls-certs\") pod \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\" (UID: \"d18f2663-e551-4458-a5bf-3fa7c8caeaf3\") " Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.622031 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.622274 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.622970 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.622989 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.627273 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-scripts" (OuterVolumeSpecName: "scripts") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.634540 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-kube-api-access-wpfhb" (OuterVolumeSpecName: "kube-api-access-wpfhb") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "kube-api-access-wpfhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.663114 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.668959 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.669188 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.670278 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.670421 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.670518 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" gracePeriod=30 Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.675201 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.678073 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.718035 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.724353 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.724390 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.724402 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.724414 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpfhb\" (UniqueName: \"kubernetes.io/projected/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-kube-api-access-wpfhb\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.724431 4730 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.743366 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-config-data" (OuterVolumeSpecName: "config-data") pod "d18f2663-e551-4458-a5bf-3fa7c8caeaf3" (UID: "d18f2663-e551-4458-a5bf-3fa7c8caeaf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.791395 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.829034 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18f2663-e551-4458-a5bf-3fa7c8caeaf3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.854204 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.865074 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.882276 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.882897 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="sg-core" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.882923 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="sg-core" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.882934 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-notification-agent" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.882946 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-notification-agent" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.882966 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="proxy-httpd" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.882972 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="proxy-httpd" Jan 31 16:49:27 crc kubenswrapper[4730]: E0131 16:49:27.883000 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-central-agent" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.883007 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-central-agent" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.883231 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="proxy-httpd" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.883253 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="sg-core" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.883271 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-central-agent" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.883283 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" containerName="ceilometer-notification-agent" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.885455 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.887896 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.888043 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.888044 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 16:49:27 crc kubenswrapper[4730]: I0131 16:49:27.912880 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.033089 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-run-httpd\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.033525 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-scripts\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.033703 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8h2\" (UniqueName: \"kubernetes.io/projected/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-kube-api-access-4v8h2\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.033838 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.034188 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-log-httpd\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.034518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-config-data\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.034633 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.034727 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.137447 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-log-httpd\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.138046 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-config-data\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.138347 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.138085 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-log-httpd\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.138859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.139596 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-run-httpd\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.139887 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-scripts\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.140134 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8h2\" (UniqueName: \"kubernetes.io/projected/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-kube-api-access-4v8h2\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.140219 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-run-httpd\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.140373 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.145109 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.145155 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-config-data\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.150342 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.150935 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-scripts\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.151118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.165384 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8h2\" (UniqueName: \"kubernetes.io/projected/f64b5463-38cd-4c71-b9ea-ce3c348f6b06-kube-api-access-4v8h2\") pod \"ceilometer-0\" (UID: \"f64b5463-38cd-4c71-b9ea-ce3c348f6b06\") " pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.215440 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.475750 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18f2663-e551-4458-a5bf-3fa7c8caeaf3" path="/var/lib/kubelet/pods/d18f2663-e551-4458-a5bf-3fa7c8caeaf3/volumes" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.533096 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" exitCode=0 Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.533153 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f"} Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.533184 4730 scope.go:117] "RemoveContainer" containerID="cfa1bb05f3641c975fee4a59cca6327c8fe5928e7e78248e3be9568518a9568f" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.533726 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.533751 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:49:28 crc kubenswrapper[4730]: E0131 16:49:28.534041 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.664767 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 16:49:28 crc kubenswrapper[4730]: W0131 16:49:28.711735 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf64b5463_38cd_4c71_b9ea_ce3c348f6b06.slice/crio-bc9ee7e749bf1fdac0b556a877b71ee5a25103c4831112d863b96ac5e1b54e69 WatchSource:0}: Error finding container bc9ee7e749bf1fdac0b556a877b71ee5a25103c4831112d863b96ac5e1b54e69: Status 404 returned error can't find the container with id bc9ee7e749bf1fdac0b556a877b71ee5a25103c4831112d863b96ac5e1b54e69 Jan 31 16:49:28 crc kubenswrapper[4730]: I0131 16:49:28.991960 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.072127 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-combined-ca-bundle\") pod \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.072181 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-config-data\") pod \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.072212 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-logs\") pod \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.072322 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsddc\" (UniqueName: \"kubernetes.io/projected/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-kube-api-access-gsddc\") pod \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\" (UID: \"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5\") " Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.073232 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-logs" (OuterVolumeSpecName: "logs") pod "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" (UID: "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.083963 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-kube-api-access-gsddc" (OuterVolumeSpecName: "kube-api-access-gsddc") pod "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" (UID: "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5"). InnerVolumeSpecName "kube-api-access-gsddc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.110828 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-config-data" (OuterVolumeSpecName: "config-data") pod "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" (UID: "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.151976 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" (UID: "aab83ba1-24e1-48aa-bf77-8444ed3cc8b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.183656 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsddc\" (UniqueName: \"kubernetes.io/projected/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-kube-api-access-gsddc\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.183692 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.183701 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.183711 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.552087 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f64b5463-38cd-4c71-b9ea-ce3c348f6b06","Type":"ContainerStarted","Data":"52bbba3aa77d2352c361c5d90fa6dfa8da791c53772480824b0df8cec684fed6"} Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.552458 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f64b5463-38cd-4c71-b9ea-ce3c348f6b06","Type":"ContainerStarted","Data":"bc9ee7e749bf1fdac0b556a877b71ee5a25103c4831112d863b96ac5e1b54e69"} Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.555428 4730 generic.go:334] "Generic (PLEG): container finished" podID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerID="565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470" exitCode=0 Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.555517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5","Type":"ContainerDied","Data":"565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470"} Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.555549 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aab83ba1-24e1-48aa-bf77-8444ed3cc8b5","Type":"ContainerDied","Data":"0b9841f74a584795ad0f25a7e87792a7f100ae4f9f6c3fc6657d11b2f1f2add8"} Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.555570 4730 scope.go:117] "RemoveContainer" containerID="565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.555712 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.596568 4730 scope.go:117] "RemoveContainer" containerID="e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.611638 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.624061 4730 scope.go:117] "RemoveContainer" containerID="565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470" Jan 31 16:49:29 crc kubenswrapper[4730]: E0131 16:49:29.624381 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470\": container with ID starting with 565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470 not found: ID does not exist" containerID="565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.624416 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470"} err="failed to get container status \"565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470\": rpc error: code = NotFound desc = could not find container \"565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470\": container with ID starting with 565645dae3ccc4243eec63f6d5d0a436bcd8cdfe9ffb4921ae96ce497ee5f470 not found: ID does not exist" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.624443 4730 scope.go:117] "RemoveContainer" containerID="e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30" Jan 31 16:49:29 crc kubenswrapper[4730]: E0131 16:49:29.625035 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30\": container with ID starting with e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30 not found: ID does not exist" containerID="e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.625056 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30"} err="failed to get container status \"e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30\": rpc error: code = NotFound desc = could not find container \"e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30\": container with ID starting with e8dbb3e34439801b3364ebd12b63fd76317cfe65d4079b82ebc04e7636a72e30 not found: ID does not exist" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.626901 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.662261 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:29 crc kubenswrapper[4730]: E0131 16:49:29.662640 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-log" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.662655 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-log" Jan 31 16:49:29 crc 
kubenswrapper[4730]: E0131 16:49:29.662673 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-api" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.662680 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-api" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.662918 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-api" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.662939 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" containerName="nova-api-log" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.664357 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.672492 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.672825 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.680356 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.699096 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-config-data\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.699153 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-logs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.699239 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfc6\" (UniqueName: \"kubernetes.io/projected/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-kube-api-access-mcfc6\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.699283 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.699313 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-public-tls-certs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.699332 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.712760 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.803112 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcfc6\" (UniqueName: \"kubernetes.io/projected/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-kube-api-access-mcfc6\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.803177 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.803216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-public-tls-certs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.803240 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.803262 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-config-data\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.803879 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-logs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.804202 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-logs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.809340 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.809686 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-config-data\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.814334 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-public-tls-certs\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.814919 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.829933 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.847363 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfc6\" (UniqueName: \"kubernetes.io/projected/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-kube-api-access-mcfc6\") pod \"nova-api-0\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " pod="openstack/nova-api-0" Jan 31 16:49:29 crc kubenswrapper[4730]: I0131 16:49:29.981387 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.032833 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.474753 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab83ba1-24e1-48aa-bf77-8444ed3cc8b5" path="/var/lib/kubelet/pods/aab83ba1-24e1-48aa-bf77-8444ed3cc8b5/volumes" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.475847 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:30 crc kubenswrapper[4730]: W0131 16:49:30.483714 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1787dcd4_7e92_43e1_97bd_fbd6de7f1ff3.slice/crio-0fcaab9a78245823f7040d3035e7457f2c50521f626799e2c1bda213d2ce6cc9 WatchSource:0}: Error finding container 0fcaab9a78245823f7040d3035e7457f2c50521f626799e2c1bda213d2ce6cc9: Status 404 returned error can't find the container with id 0fcaab9a78245823f7040d3035e7457f2c50521f626799e2c1bda213d2ce6cc9 Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.569718 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3","Type":"ContainerStarted","Data":"0fcaab9a78245823f7040d3035e7457f2c50521f626799e2c1bda213d2ce6cc9"} Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.572531 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f64b5463-38cd-4c71-b9ea-ce3c348f6b06","Type":"ContainerStarted","Data":"e0a7597f210734abd839694208d116cb9aa21858be50c1761a87b4f5cd9902a1"} Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.590189 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.810912 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tf7gr"] Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.812427 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.816081 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.816987 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.820328 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.820383 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-config-data\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.820415 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl72p\" (UniqueName: \"kubernetes.io/projected/f176fb26-f0f7-4a29-9963-d1e2d27805e2-kube-api-access-kl72p\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.820497 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-scripts\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.835905 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tf7gr"] Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.922856 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.923700 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-config-data\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.923830 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl72p\" (UniqueName: \"kubernetes.io/projected/f176fb26-f0f7-4a29-9963-d1e2d27805e2-kube-api-access-kl72p\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.923974 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-scripts\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.959418 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl72p\" (UniqueName: \"kubernetes.io/projected/f176fb26-f0f7-4a29-9963-d1e2d27805e2-kube-api-access-kl72p\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.962415 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.963311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-config-data\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:30 crc kubenswrapper[4730]: I0131 16:49:30.963871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-scripts\") pod \"nova-cell1-cell-mapping-tf7gr\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:31 crc kubenswrapper[4730]: I0131 16:49:31.213755 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:31 crc kubenswrapper[4730]: I0131 16:49:31.625410 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f64b5463-38cd-4c71-b9ea-ce3c348f6b06","Type":"ContainerStarted","Data":"af3ad4946d403977b1c3a363229091b8d1515d19a66dddf63ad581c905c5e098"} Jan 31 16:49:31 crc kubenswrapper[4730]: I0131 16:49:31.634057 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3","Type":"ContainerStarted","Data":"c97089a23a6c29990274e3123b803082b780944b17217c01debf09eebc67230d"} Jan 31 16:49:31 crc kubenswrapper[4730]: I0131 16:49:31.634083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3","Type":"ContainerStarted","Data":"ebbe0b53b96b9b99998df66a005a4b19c3b7c2936a4b35225f5ecf872890775e"} Jan 31 16:49:31 crc kubenswrapper[4730]: I0131 16:49:31.652875 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.652852442 podStartE2EDuration="2.652852442s" podCreationTimestamp="2026-01-31 16:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:31.650829776 +0000 UTC m=+1158.456886692" watchObservedRunningTime="2026-01-31 16:49:31.652852442 +0000 UTC m=+1158.458909358" Jan 31 16:49:31 crc kubenswrapper[4730]: I0131 16:49:31.749564 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tf7gr"] Jan 31 16:49:31 crc kubenswrapper[4730]: W0131 16:49:31.755912 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf176fb26_f0f7_4a29_9963_d1e2d27805e2.slice/crio-f2829ebc66003e3f8af91a0f469f301f6a546f83d7ac7780fe1bc846f94e8724 WatchSource:0}: Error finding container f2829ebc66003e3f8af91a0f469f301f6a546f83d7ac7780fe1bc846f94e8724: Status 404 returned error can't find the container with id f2829ebc66003e3f8af91a0f469f301f6a546f83d7ac7780fe1bc846f94e8724 Jan 31 16:49:32 crc kubenswrapper[4730]: I0131 16:49:32.648943 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tf7gr" event={"ID":"f176fb26-f0f7-4a29-9963-d1e2d27805e2","Type":"ContainerStarted","Data":"9b0114ec1e0ac2a3934568aeddd701539ff88ff97dad0e68c7f0988adc8c7474"} Jan 31 16:49:32 crc kubenswrapper[4730]: I0131 16:49:32.649288 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tf7gr" event={"ID":"f176fb26-f0f7-4a29-9963-d1e2d27805e2","Type":"ContainerStarted","Data":"f2829ebc66003e3f8af91a0f469f301f6a546f83d7ac7780fe1bc846f94e8724"} Jan 31 16:49:32 crc kubenswrapper[4730]: I0131 16:49:32.673324 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tf7gr" podStartSLOduration=2.673307205 podStartE2EDuration="2.673307205s" podCreationTimestamp="2026-01-31 16:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:32.665547792 +0000 UTC m=+1159.471604758" watchObservedRunningTime="2026-01-31 16:49:32.673307205 +0000 UTC m=+1159.479364121" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.081857 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-95bd95597-lwsxh" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.172110 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-v686n"] Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.172327 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56d99cc479-v686n" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerName="dnsmasq-dns" containerID="cri-o://0b5703d3ce0ea318286f6b16d7d34bdca84447492bffca251f523dd5b1a385f7" gracePeriod=10 Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.665591 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f64b5463-38cd-4c71-b9ea-ce3c348f6b06","Type":"ContainerStarted","Data":"386bf69041f6ef3b99ced30cc106e64257b7a216b6ccb68628189b9b2229bde7"} Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.667663 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.676701 4730 generic.go:334] "Generic (PLEG): container finished" podID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerID="0b5703d3ce0ea318286f6b16d7d34bdca84447492bffca251f523dd5b1a385f7" exitCode=0 Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.677106 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-v686n" event={"ID":"fb0d8830-2b7d-4646-9973-9f72e59222bc","Type":"ContainerDied","Data":"0b5703d3ce0ea318286f6b16d7d34bdca84447492bffca251f523dd5b1a385f7"} Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.700680 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.276123984 podStartE2EDuration="6.700656958s" podCreationTimestamp="2026-01-31 16:49:27 +0000 UTC" firstStartedPulling="2026-01-31 16:49:28.730241393 +0000 UTC m=+1155.536298309" lastFinishedPulling="2026-01-31 16:49:33.154774367 +0000 UTC m=+1159.960831283" observedRunningTime="2026-01-31 16:49:33.689227234 +0000 UTC m=+1160.495284150" watchObservedRunningTime="2026-01-31 16:49:33.700656958 +0000 UTC m=+1160.506713874" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.703509 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.788197 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-dns-svc\") pod \"fb0d8830-2b7d-4646-9973-9f72e59222bc\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.788290 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-sb\") pod \"fb0d8830-2b7d-4646-9973-9f72e59222bc\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.788358 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-nb\") pod \"fb0d8830-2b7d-4646-9973-9f72e59222bc\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.788382 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c74d4\" (UniqueName: \"kubernetes.io/projected/fb0d8830-2b7d-4646-9973-9f72e59222bc-kube-api-access-c74d4\") pod \"fb0d8830-2b7d-4646-9973-9f72e59222bc\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.788471 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-config\") pod \"fb0d8830-2b7d-4646-9973-9f72e59222bc\" (UID: \"fb0d8830-2b7d-4646-9973-9f72e59222bc\") " Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.800991 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0d8830-2b7d-4646-9973-9f72e59222bc-kube-api-access-c74d4" (OuterVolumeSpecName: "kube-api-access-c74d4") pod "fb0d8830-2b7d-4646-9973-9f72e59222bc" (UID: "fb0d8830-2b7d-4646-9973-9f72e59222bc"). InnerVolumeSpecName "kube-api-access-c74d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.856096 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb0d8830-2b7d-4646-9973-9f72e59222bc" (UID: "fb0d8830-2b7d-4646-9973-9f72e59222bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.865327 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb0d8830-2b7d-4646-9973-9f72e59222bc" (UID: "fb0d8830-2b7d-4646-9973-9f72e59222bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.869886 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-config" (OuterVolumeSpecName: "config") pod "fb0d8830-2b7d-4646-9973-9f72e59222bc" (UID: "fb0d8830-2b7d-4646-9973-9f72e59222bc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.875324 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb0d8830-2b7d-4646-9973-9f72e59222bc" (UID: "fb0d8830-2b7d-4646-9973-9f72e59222bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.890417 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-config\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.890437 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.890447 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.890458 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb0d8830-2b7d-4646-9973-9f72e59222bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:33 crc kubenswrapper[4730]: I0131 16:49:33.890467 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c74d4\" (UniqueName: \"kubernetes.io/projected/fb0d8830-2b7d-4646-9973-9f72e59222bc-kube-api-access-c74d4\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:34 crc kubenswrapper[4730]: I0131 16:49:34.686567 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-v686n" event={"ID":"fb0d8830-2b7d-4646-9973-9f72e59222bc","Type":"ContainerDied","Data":"4f3a4d20a5baaddd798296b977ac8bb567f5acfcd35aa38ae935787150e21b80"} Jan 31 16:49:34 crc kubenswrapper[4730]: I0131 16:49:34.686629 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-v686n" Jan 31 16:49:34 crc kubenswrapper[4730]: I0131 16:49:34.686877 4730 scope.go:117] "RemoveContainer" containerID="0b5703d3ce0ea318286f6b16d7d34bdca84447492bffca251f523dd5b1a385f7" Jan 31 16:49:34 crc kubenswrapper[4730]: I0131 16:49:34.710392 4730 scope.go:117] "RemoveContainer" containerID="7247ad9a1c5d1da5f50bf9cf47f358cc3f7973abe8066ae7a04b1940b435ed3e" Jan 31 16:49:34 crc kubenswrapper[4730]: I0131 16:49:34.713640 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-v686n"] Jan 31 16:49:34 crc kubenswrapper[4730]: I0131 16:49:34.722810 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-v686n"] Jan 31 16:49:36 crc kubenswrapper[4730]: I0131 16:49:36.483669 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" path="/var/lib/kubelet/pods/fb0d8830-2b7d-4646-9973-9f72e59222bc/volumes" Jan 31 16:49:37 crc kubenswrapper[4730]: I0131 16:49:37.466778 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:49:37 crc kubenswrapper[4730]: I0131 16:49:37.467265 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:49:37 crc kubenswrapper[4730]: I0131 16:49:37.467454 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:49:37 crc kubenswrapper[4730]: E0131 16:49:37.468747 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:49:37 crc kubenswrapper[4730]: I0131 16:49:37.729368 4730 generic.go:334] "Generic (PLEG): container finished" podID="f176fb26-f0f7-4a29-9963-d1e2d27805e2" containerID="9b0114ec1e0ac2a3934568aeddd701539ff88ff97dad0e68c7f0988adc8c7474" exitCode=0 Jan 31 16:49:37 crc kubenswrapper[4730]: I0131 16:49:37.729448 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tf7gr" event={"ID":"f176fb26-f0f7-4a29-9963-d1e2d27805e2","Type":"ContainerDied","Data":"9b0114ec1e0ac2a3934568aeddd701539ff88ff97dad0e68c7f0988adc8c7474"} Jan 31 16:49:38 crc kubenswrapper[4730]: I0131 16:49:38.588873 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56d99cc479-v686n" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.196:5353: i/o timeout" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.189289 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.195842 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-combined-ca-bundle\") pod \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.195912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-config-data\") pod \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.195962 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl72p\" (UniqueName: \"kubernetes.io/projected/f176fb26-f0f7-4a29-9963-d1e2d27805e2-kube-api-access-kl72p\") pod \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.195979 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-scripts\") pod \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\" (UID: \"f176fb26-f0f7-4a29-9963-d1e2d27805e2\") " Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.201481 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-scripts" (OuterVolumeSpecName: "scripts") pod "f176fb26-f0f7-4a29-9963-d1e2d27805e2" (UID: "f176fb26-f0f7-4a29-9963-d1e2d27805e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.208851 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f176fb26-f0f7-4a29-9963-d1e2d27805e2-kube-api-access-kl72p" (OuterVolumeSpecName: "kube-api-access-kl72p") pod "f176fb26-f0f7-4a29-9963-d1e2d27805e2" (UID: "f176fb26-f0f7-4a29-9963-d1e2d27805e2"). InnerVolumeSpecName "kube-api-access-kl72p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.230528 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f176fb26-f0f7-4a29-9963-d1e2d27805e2" (UID: "f176fb26-f0f7-4a29-9963-d1e2d27805e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.230540 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-config-data" (OuterVolumeSpecName: "config-data") pod "f176fb26-f0f7-4a29-9963-d1e2d27805e2" (UID: "f176fb26-f0f7-4a29-9963-d1e2d27805e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.298172 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.298213 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.298226 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl72p\" (UniqueName: \"kubernetes.io/projected/f176fb26-f0f7-4a29-9963-d1e2d27805e2-kube-api-access-kl72p\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.298238 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f176fb26-f0f7-4a29-9963-d1e2d27805e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.753042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tf7gr" event={"ID":"f176fb26-f0f7-4a29-9963-d1e2d27805e2","Type":"ContainerDied","Data":"f2829ebc66003e3f8af91a0f469f301f6a546f83d7ac7780fe1bc846f94e8724"} Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.753345 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2829ebc66003e3f8af91a0f469f301f6a546f83d7ac7780fe1bc846f94e8724" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.753117 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tf7gr" Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.957290 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.957731 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-log" containerID="cri-o://ebbe0b53b96b9b99998df66a005a4b19c3b7c2936a4b35225f5ecf872890775e" gracePeriod=30 Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.957986 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-api" containerID="cri-o://c97089a23a6c29990274e3123b803082b780944b17217c01debf09eebc67230d" gracePeriod=30 Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.966924 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:39 crc kubenswrapper[4730]: I0131 16:49:39.967126 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8d46a81f-3e6a-4035-869e-db235995f42e" containerName="nova-scheduler-scheduler" containerID="cri-o://af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" gracePeriod=30 Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.045388 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.045629 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44259ad5-956e-4e78-8564-238063ce2747" 
containerName="nova-metadata-log" containerID="cri-o://d1ec40b4b1eefd9124c5cafaad268776776156d11f748798e62100418aec2bb7" gracePeriod=30 Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.046030 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-metadata" containerID="cri-o://4c7724b37a010451d6528b1a892ccda05d0a8a04c76ded9b741679f0c6a14caf" gracePeriod=30 Jan 31 16:49:40 crc kubenswrapper[4730]: E0131 16:49:40.246859 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44259ad5_956e_4e78_8564_238063ce2747.slice/crio-d1ec40b4b1eefd9124c5cafaad268776776156d11f748798e62100418aec2bb7.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.763250 4730 generic.go:334] "Generic (PLEG): container finished" podID="44259ad5-956e-4e78-8564-238063ce2747" containerID="d1ec40b4b1eefd9124c5cafaad268776776156d11f748798e62100418aec2bb7" exitCode=143 Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.763331 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44259ad5-956e-4e78-8564-238063ce2747","Type":"ContainerDied","Data":"d1ec40b4b1eefd9124c5cafaad268776776156d11f748798e62100418aec2bb7"} Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.765655 4730 generic.go:334] "Generic (PLEG): container finished" podID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerID="c97089a23a6c29990274e3123b803082b780944b17217c01debf09eebc67230d" exitCode=0 Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.765683 4730 generic.go:334] "Generic (PLEG): container finished" podID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerID="ebbe0b53b96b9b99998df66a005a4b19c3b7c2936a4b35225f5ecf872890775e" exitCode=143 Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.765684 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3","Type":"ContainerDied","Data":"c97089a23a6c29990274e3123b803082b780944b17217c01debf09eebc67230d"} Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.765716 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3","Type":"ContainerDied","Data":"ebbe0b53b96b9b99998df66a005a4b19c3b7c2936a4b35225f5ecf872890775e"} Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.765726 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3","Type":"ContainerDied","Data":"0fcaab9a78245823f7040d3035e7457f2c50521f626799e2c1bda213d2ce6cc9"} Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.765737 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fcaab9a78245823f7040d3035e7457f2c50521f626799e2c1bda213d2ce6cc9" Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.783952 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.924978 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-config-data\") pod \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.925034 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-combined-ca-bundle\") pod \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.925106 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcfc6\" (UniqueName: \"kubernetes.io/projected/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-kube-api-access-mcfc6\") pod \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.925251 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-internal-tls-certs\") pod \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.925307 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-logs\") pod \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.925349 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-public-tls-certs\") pod \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\" (UID: \"1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3\") " Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.926010 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-logs" (OuterVolumeSpecName: "logs") pod "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" (UID: "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:40 crc kubenswrapper[4730]: I0131 16:49:40.933546 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-kube-api-access-mcfc6" (OuterVolumeSpecName: "kube-api-access-mcfc6") pod "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" (UID: "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3"). InnerVolumeSpecName "kube-api-access-mcfc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.027394 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcfc6\" (UniqueName: \"kubernetes.io/projected/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-kube-api-access-mcfc6\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.027605 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.028817 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" (UID: "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.028929 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-config-data" (OuterVolumeSpecName: "config-data") pod "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" (UID: "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.047925 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" (UID: "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.061881 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" (UID: "1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.128984 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.129027 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.129037 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.129046 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.464866 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.464895 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.465287 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.550976 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.553303 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.554957 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.555073 4730 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8d46a81f-3e6a-4035-869e-db235995f42e" containerName="nova-scheduler-scheduler" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.772785 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.801162 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.812499 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.825072 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.825617 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f176fb26-f0f7-4a29-9963-d1e2d27805e2" containerName="nova-manage" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.825689 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f176fb26-f0f7-4a29-9963-d1e2d27805e2" containerName="nova-manage" Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.825819 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerName="init" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.825876 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerName="init" Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.825934 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-api" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.825997 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-api" Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.826059 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-log" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.826106 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-log" Jan 31 16:49:41 crc kubenswrapper[4730]: E0131 16:49:41.826166 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerName="dnsmasq-dns" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.826218 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerName="dnsmasq-dns" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.826435 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0d8830-2b7d-4646-9973-9f72e59222bc" containerName="dnsmasq-dns" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.826528 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f176fb26-f0f7-4a29-9963-d1e2d27805e2" containerName="nova-manage" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.826599 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-log" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.826652 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" containerName="nova-api-api" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.827632 4730 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.829933 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.830534 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.830619 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.842929 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.944137 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.944195 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.944217 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-config-data\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.944309 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.944339 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-logs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:41 crc kubenswrapper[4730]: I0131 16:49:41.944371 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx2k4\" (UniqueName: \"kubernetes.io/projected/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-kube-api-access-fx2k4\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.045613 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.046006 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-logs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.046050 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx2k4\" (UniqueName: \"kubernetes.io/projected/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-kube-api-access-fx2k4\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.046114 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.046153 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.046179 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-config-data\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.046348 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-logs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.054019 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-public-tls-certs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.054034 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.062364 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-config-data\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.066365 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.067279 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx2k4\" (UniqueName: 
\"kubernetes.io/projected/63a7e1f3-1bc8-429e-a94c-729bc81d12ac-kube-api-access-fx2k4\") pod \"nova-api-0\" (UID: \"63a7e1f3-1bc8-429e-a94c-729bc81d12ac\") " pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.146000 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.473406 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3" path="/var/lib/kubelet/pods/1787dcd4-7e92-43e1-97bd-fbd6de7f1ff3/volumes" Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.600068 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 16:49:42 crc kubenswrapper[4730]: I0131 16:49:42.783778 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a7e1f3-1bc8-429e-a94c-729bc81d12ac","Type":"ContainerStarted","Data":"41be0843d01d0659188ebbb6888ae59e665d77c671d57beab13cf94d05edae2f"} Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.803914 4730 generic.go:334] "Generic (PLEG): container finished" podID="44259ad5-956e-4e78-8564-238063ce2747" containerID="4c7724b37a010451d6528b1a892ccda05d0a8a04c76ded9b741679f0c6a14caf" exitCode=0 Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.804241 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44259ad5-956e-4e78-8564-238063ce2747","Type":"ContainerDied","Data":"4c7724b37a010451d6528b1a892ccda05d0a8a04c76ded9b741679f0c6a14caf"} Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.807005 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a7e1f3-1bc8-429e-a94c-729bc81d12ac","Type":"ContainerStarted","Data":"cff18a43beddd9c809e84cd2f0bf69c3d5c3235f9b98d0649cf8ec365c12c703"} Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.807036 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a7e1f3-1bc8-429e-a94c-729bc81d12ac","Type":"ContainerStarted","Data":"48dea53244f33a5efb453264489fe59dccb6543afd01c999f50cd09cfd264e83"} Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.830085 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.830070978 podStartE2EDuration="2.830070978s" podCreationTimestamp="2026-01-31 16:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:43.826711526 +0000 UTC m=+1170.632768452" watchObservedRunningTime="2026-01-31 16:49:43.830070978 +0000 UTC m=+1170.636127894" Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.887852 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.993683 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44259ad5-956e-4e78-8564-238063ce2747-logs\") pod \"44259ad5-956e-4e78-8564-238063ce2747\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.993736 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-combined-ca-bundle\") pod \"44259ad5-956e-4e78-8564-238063ce2747\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.993782 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-config-data\") pod \"44259ad5-956e-4e78-8564-238063ce2747\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.993861 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-nova-metadata-tls-certs\") pod \"44259ad5-956e-4e78-8564-238063ce2747\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.993898 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/44259ad5-956e-4e78-8564-238063ce2747-kube-api-access-swwp4\") pod \"44259ad5-956e-4e78-8564-238063ce2747\" (UID: \"44259ad5-956e-4e78-8564-238063ce2747\") " Jan 31 16:49:43 crc kubenswrapper[4730]: I0131 16:49:43.995577 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44259ad5-956e-4e78-8564-238063ce2747-logs" (OuterVolumeSpecName: "logs") pod "44259ad5-956e-4e78-8564-238063ce2747" (UID: "44259ad5-956e-4e78-8564-238063ce2747"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.015087 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44259ad5-956e-4e78-8564-238063ce2747-kube-api-access-swwp4" (OuterVolumeSpecName: "kube-api-access-swwp4") pod "44259ad5-956e-4e78-8564-238063ce2747" (UID: "44259ad5-956e-4e78-8564-238063ce2747"). InnerVolumeSpecName "kube-api-access-swwp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.030019 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44259ad5-956e-4e78-8564-238063ce2747" (UID: "44259ad5-956e-4e78-8564-238063ce2747"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.047847 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-config-data" (OuterVolumeSpecName: "config-data") pod "44259ad5-956e-4e78-8564-238063ce2747" (UID: "44259ad5-956e-4e78-8564-238063ce2747"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.074976 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "44259ad5-956e-4e78-8564-238063ce2747" (UID: "44259ad5-956e-4e78-8564-238063ce2747"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.097368 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44259ad5-956e-4e78-8564-238063ce2747-logs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.097403 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.097413 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.097422 4730 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44259ad5-956e-4e78-8564-238063ce2747-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.097433 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swwp4\" (UniqueName: \"kubernetes.io/projected/44259ad5-956e-4e78-8564-238063ce2747-kube-api-access-swwp4\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.818522 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44259ad5-956e-4e78-8564-238063ce2747","Type":"ContainerDied","Data":"d7c73100a070b3a62bf07da70300013e1666706000dcc129e4eaefd5a7a11f40"} Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.819514 4730 scope.go:117] "RemoveContainer" containerID="4c7724b37a010451d6528b1a892ccda05d0a8a04c76ded9b741679f0c6a14caf" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.818560 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.845539 4730 scope.go:117] "RemoveContainer" containerID="d1ec40b4b1eefd9124c5cafaad268776776156d11f748798e62100418aec2bb7" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.863871 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.884011 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.896007 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:49:44 crc kubenswrapper[4730]: E0131 16:49:44.896440 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-metadata" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.896456 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-metadata" Jan 31 16:49:44 crc kubenswrapper[4730]: E0131 16:49:44.896472 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-log" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.896479 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-log" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.896684 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-log" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.896698 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="44259ad5-956e-4e78-8564-238063ce2747" containerName="nova-metadata-metadata" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.897668 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.901949 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.902590 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.906506 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.918918 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.918958 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttpgr\" (UniqueName: \"kubernetes.io/projected/2df710e8-90c4-40a0-adb4-cfac0c1333cb-kube-api-access-ttpgr\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.919003 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.919025 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-config-data\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:44 crc kubenswrapper[4730]: I0131 16:49:44.919059 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df710e8-90c4-40a0-adb4-cfac0c1333cb-logs\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.020462 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.020648 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttpgr\" (UniqueName: \"kubernetes.io/projected/2df710e8-90c4-40a0-adb4-cfac0c1333cb-kube-api-access-ttpgr\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.020682 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.020703 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-config-data\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.020738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df710e8-90c4-40a0-adb4-cfac0c1333cb-logs\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.021039 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df710e8-90c4-40a0-adb4-cfac0c1333cb-logs\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.025212 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.025407 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.027829 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df710e8-90c4-40a0-adb4-cfac0c1333cb-config-data\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.035520 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttpgr\" (UniqueName: \"kubernetes.io/projected/2df710e8-90c4-40a0-adb4-cfac0c1333cb-kube-api-access-ttpgr\") pod \"nova-metadata-0\" (UID: \"2df710e8-90c4-40a0-adb4-cfac0c1333cb\") " pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.252544 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.764627 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.802254 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.830350 4730 generic.go:334] "Generic (PLEG): container finished" podID="8d46a81f-3e6a-4035-869e-db235995f42e" containerID="af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" exitCode=0 Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.830498 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.830500 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d46a81f-3e6a-4035-869e-db235995f42e","Type":"ContainerDied","Data":"af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7"} Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.830762 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d46a81f-3e6a-4035-869e-db235995f42e","Type":"ContainerDied","Data":"a4e532cbd178d78804aacc6b700359664185487313dd34d8ded2f15e25edd2b1"} Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.830780 4730 scope.go:117] "RemoveContainer" containerID="af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.837032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-config-data\") pod \"8d46a81f-3e6a-4035-869e-db235995f42e\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.837215 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6jj7\" (UniqueName: \"kubernetes.io/projected/8d46a81f-3e6a-4035-869e-db235995f42e-kube-api-access-z6jj7\") pod \"8d46a81f-3e6a-4035-869e-db235995f42e\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.837241 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-combined-ca-bundle\") pod \"8d46a81f-3e6a-4035-869e-db235995f42e\" (UID: \"8d46a81f-3e6a-4035-869e-db235995f42e\") " Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.838070 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2df710e8-90c4-40a0-adb4-cfac0c1333cb","Type":"ContainerStarted","Data":"238f33d42f626c588097f31379ce64203d56f9bff1a61e840d68b44744dbdedc"} Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.843994 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d46a81f-3e6a-4035-869e-db235995f42e-kube-api-access-z6jj7" (OuterVolumeSpecName: "kube-api-access-z6jj7") pod "8d46a81f-3e6a-4035-869e-db235995f42e" (UID: "8d46a81f-3e6a-4035-869e-db235995f42e"). InnerVolumeSpecName "kube-api-access-z6jj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.852544 4730 scope.go:117] "RemoveContainer" containerID="af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" Jan 31 16:49:45 crc kubenswrapper[4730]: E0131 16:49:45.853695 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7\": container with ID starting with af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7 not found: ID does not exist" containerID="af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.853772 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7"} err="failed to get container status \"af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7\": rpc error: code = NotFound desc = could not find container \"af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7\": container with ID starting with af98163e7d9109addbd7edb7aa3afdfdd8c921301fb40d7a7055faeeaa3f19b7 not found: ID does not exist" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.874024 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-config-data" (OuterVolumeSpecName: "config-data") pod "8d46a81f-3e6a-4035-869e-db235995f42e" (UID: "8d46a81f-3e6a-4035-869e-db235995f42e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.892264 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d46a81f-3e6a-4035-869e-db235995f42e" (UID: "8d46a81f-3e6a-4035-869e-db235995f42e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.939212 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6jj7\" (UniqueName: \"kubernetes.io/projected/8d46a81f-3e6a-4035-869e-db235995f42e-kube-api-access-z6jj7\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.939245 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:45 crc kubenswrapper[4730]: I0131 16:49:45.939256 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d46a81f-3e6a-4035-869e-db235995f42e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.168409 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.183048 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.191751 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:46 crc kubenswrapper[4730]: E0131 16:49:46.192183 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d46a81f-3e6a-4035-869e-db235995f42e" containerName="nova-scheduler-scheduler" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.192204 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d46a81f-3e6a-4035-869e-db235995f42e" containerName="nova-scheduler-scheduler" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.192387 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d46a81f-3e6a-4035-869e-db235995f42e" containerName="nova-scheduler-scheduler" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.192990 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.200039 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.201078 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.245220 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-config-data\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.245288 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.245325 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wfg2\" (UniqueName: \"kubernetes.io/projected/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-kube-api-access-2wfg2\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.346730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.347451 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfg2\" (UniqueName: \"kubernetes.io/projected/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-kube-api-access-2wfg2\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.347606 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-config-data\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.350442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.350850 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-config-data\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.375304 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wfg2\" (UniqueName: 
\"kubernetes.io/projected/c76d57fa-01c5-40f7-8dbb-317f6adcbcc9-kube-api-access-2wfg2\") pod \"nova-scheduler-0\" (UID: \"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9\") " pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.474054 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44259ad5-956e-4e78-8564-238063ce2747" path="/var/lib/kubelet/pods/44259ad5-956e-4e78-8564-238063ce2747/volumes" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.474622 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d46a81f-3e6a-4035-869e-db235995f42e" path="/var/lib/kubelet/pods/8d46a81f-3e6a-4035-869e-db235995f42e/volumes" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.513628 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.857061 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2df710e8-90c4-40a0-adb4-cfac0c1333cb","Type":"ContainerStarted","Data":"15b73f7bd90c2b649df22dfa11520d87cf105218c3692d37901ef479c7c541c5"} Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.857332 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2df710e8-90c4-40a0-adb4-cfac0c1333cb","Type":"ContainerStarted","Data":"048f107a8594bc187787b8116e7fc008a5a13235c50e3b72ad7a66fe1bf93403"} Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.874650 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.874632669 podStartE2EDuration="2.874632669s" podCreationTimestamp="2026-01-31 16:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:46.871648638 +0000 UTC m=+1173.677705554" watchObservedRunningTime="2026-01-31 16:49:46.874632669 +0000 UTC m=+1173.680689585" Jan 31 16:49:46 crc kubenswrapper[4730]: I0131 16:49:46.955993 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 16:49:47 crc kubenswrapper[4730]: I0131 16:49:47.869619 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9","Type":"ContainerStarted","Data":"65796abe15871edc9ee480a1abe03cc5b6346293b70d45acc14e06ed9a25bac3"} Jan 31 16:49:47 crc kubenswrapper[4730]: I0131 16:49:47.869927 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c76d57fa-01c5-40f7-8dbb-317f6adcbcc9","Type":"ContainerStarted","Data":"29a9a8781a2d529b006bef455723b1375e306b8c616dbc4f616c799914ba1381"} Jan 31 16:49:47 crc kubenswrapper[4730]: I0131 16:49:47.904710 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.904682399 podStartE2EDuration="1.904682399s" podCreationTimestamp="2026-01-31 16:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:49:47.897719038 +0000 UTC m=+1174.703775954" watchObservedRunningTime="2026-01-31 16:49:47.904682399 +0000 UTC m=+1174.710739315" Jan 31 16:49:49 crc kubenswrapper[4730]: I0131 16:49:49.464339 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:49:49 crc 
kubenswrapper[4730]: I0131 16:49:49.464656 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:49:49 crc kubenswrapper[4730]: I0131 16:49:49.464777 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:49:49 crc kubenswrapper[4730]: E0131 16:49:49.465048 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:49:50 crc kubenswrapper[4730]: I0131 16:49:50.253616 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 16:49:50 crc kubenswrapper[4730]: I0131 16:49:50.254210 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 16:49:51 crc kubenswrapper[4730]: I0131 16:49:51.514720 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 16:49:52 crc kubenswrapper[4730]: I0131 16:49:52.147757 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 16:49:52 crc kubenswrapper[4730]: I0131 16:49:52.148040 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 16:49:52 crc kubenswrapper[4730]: I0131 16:49:52.464152 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:49:52 crc kubenswrapper[4730]: I0131 16:49:52.464191 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:49:52 crc kubenswrapper[4730]: E0131 16:49:52.464463 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:49:53 crc kubenswrapper[4730]: I0131 16:49:53.159943 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63a7e1f3-1bc8-429e-a94c-729bc81d12ac" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:53 crc kubenswrapper[4730]: I0131 16:49:53.160129 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="63a7e1f3-1bc8-429e-a94c-729bc81d12ac" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:55 crc kubenswrapper[4730]: I0131 16:49:55.253033 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 16:49:55 crc kubenswrapper[4730]: I0131 16:49:55.253352 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 16:49:56 crc kubenswrapper[4730]: I0131 16:49:56.268036 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2df710e8-90c4-40a0-adb4-cfac0c1333cb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:56 crc kubenswrapper[4730]: I0131 16:49:56.268073 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2df710e8-90c4-40a0-adb4-cfac0c1333cb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 16:49:56 crc kubenswrapper[4730]: I0131 16:49:56.514356 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 16:49:56 crc kubenswrapper[4730]: I0131 16:49:56.540927 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 16:49:56 crc kubenswrapper[4730]: I0131 16:49:56.977594 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 16:49:58 crc kubenswrapper[4730]: I0131 16:49:58.228521 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 16:50:02 crc kubenswrapper[4730]: I0131 16:50:02.160932 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 16:50:02 crc kubenswrapper[4730]: I0131 16:50:02.161845 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 16:50:02 crc kubenswrapper[4730]: I0131 16:50:02.162627 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 16:50:02 crc kubenswrapper[4730]: I0131 16:50:02.172470 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 16:50:02 crc kubenswrapper[4730]: I0131 16:50:02.464917 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:02 crc kubenswrapper[4730]: I0131 16:50:02.465008 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:02 crc kubenswrapper[4730]: I0131 16:50:02.465145 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:02 crc kubenswrapper[4730]: E0131 16:50:02.465493 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for 
\"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:03 crc kubenswrapper[4730]: I0131 16:50:03.010742 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 16:50:03 crc kubenswrapper[4730]: I0131 16:50:03.017926 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 16:50:05 crc kubenswrapper[4730]: I0131 16:50:05.259411 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 16:50:05 crc kubenswrapper[4730]: I0131 16:50:05.261355 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 16:50:05 crc kubenswrapper[4730]: I0131 16:50:05.266325 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 16:50:06 crc kubenswrapper[4730]: I0131 16:50:06.054553 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 16:50:07 crc kubenswrapper[4730]: I0131 16:50:07.465524 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:50:07 crc kubenswrapper[4730]: I0131 16:50:07.465886 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:50:07 crc kubenswrapper[4730]: E0131 16:50:07.466198 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:50:13 crc kubenswrapper[4730]: I0131 16:50:13.465546 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:13 crc kubenswrapper[4730]: I0131 16:50:13.466336 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:13 crc kubenswrapper[4730]: I0131 16:50:13.466519 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:13 crc kubenswrapper[4730]: E0131 16:50:13.467150 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:15 crc kubenswrapper[4730]: I0131 16:50:15.133377 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="57f18dcfb7530a415b40c27dcda7694fcabb603d09c2b77a985646d961881789" exitCode=1 Jan 31 16:50:15 crc kubenswrapper[4730]: I0131 16:50:15.133524 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"57f18dcfb7530a415b40c27dcda7694fcabb603d09c2b77a985646d961881789"} Jan 31 16:50:15 crc kubenswrapper[4730]: I0131 16:50:15.134371 4730 scope.go:117] "RemoveContainer" containerID="ee85bc5fc59c3f0b6790a01a8bec9adde51e9224843a4dc959082405198dc125" Jan 31 16:50:15 crc kubenswrapper[4730]: I0131 16:50:15.135073 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:15 crc kubenswrapper[4730]: I0131 16:50:15.135133 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:15 crc kubenswrapper[4730]: I0131 16:50:15.135153 4730 scope.go:117] "RemoveContainer" containerID="57f18dcfb7530a415b40c27dcda7694fcabb603d09c2b77a985646d961881789" Jan 31 16:50:15 crc kubenswrapper[4730]: I0131 16:50:15.135226 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:15 crc kubenswrapper[4730]: E0131 16:50:15.135504 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.173168 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070" exitCode=1 Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.173234 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070"} Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.173495 4730 scope.go:117] "RemoveContainer" 
containerID="1f3360e1f421204b7af9c6c32dc9ed3f548543f1cce4369ddb18b1d85fdb6ad2" Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.174699 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.174881 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.174922 4730 scope.go:117] "RemoveContainer" containerID="57f18dcfb7530a415b40c27dcda7694fcabb603d09c2b77a985646d961881789" Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.175004 4730 scope.go:117] "RemoveContainer" containerID="d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070" Jan 31 16:50:17 crc kubenswrapper[4730]: I0131 16:50:17.175031 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:17 crc kubenswrapper[4730]: E0131 16:50:17.175913 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:21 crc kubenswrapper[4730]: I0131 16:50:21.464394 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:50:21 crc kubenswrapper[4730]: I0131 16:50:21.466113 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:50:21 crc kubenswrapper[4730]: E0131 16:50:21.466595 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:50:26 crc kubenswrapper[4730]: I0131 16:50:26.975012 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 31 16:50:26 crc kubenswrapper[4730]: I0131 16:50:26.976633 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:50:28 crc kubenswrapper[4730]: I0131 16:50:28.465899 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:28 crc kubenswrapper[4730]: I0131 16:50:28.466325 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:28 crc kubenswrapper[4730]: I0131 16:50:28.466357 4730 scope.go:117] "RemoveContainer" containerID="57f18dcfb7530a415b40c27dcda7694fcabb603d09c2b77a985646d961881789" Jan 31 16:50:28 crc kubenswrapper[4730]: I0131 16:50:28.466423 4730 scope.go:117] "RemoveContainer" containerID="d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070" Jan 31 16:50:28 crc kubenswrapper[4730]: I0131 16:50:28.466432 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:28 crc kubenswrapper[4730]: E0131 16:50:28.676991 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:29 crc kubenswrapper[4730]: I0131 16:50:29.355231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"7141b3c96e8593876e504fdd0590a5d814ff71c190eba021e3cd88de170efd1f"} Jan 31 16:50:29 crc kubenswrapper[4730]: I0131 16:50:29.357156 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:29 crc kubenswrapper[4730]: I0131 16:50:29.357282 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:29 crc kubenswrapper[4730]: I0131 16:50:29.357432 4730 scope.go:117] "RemoveContainer" containerID="d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070" Jan 31 16:50:29 crc kubenswrapper[4730]: I0131 16:50:29.357453 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:29 crc kubenswrapper[4730]: E0131 16:50:29.362648 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:35 crc kubenswrapper[4730]: I0131 16:50:35.464419 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:50:35 crc kubenswrapper[4730]: I0131 16:50:35.465114 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:50:35 crc kubenswrapper[4730]: E0131 16:50:35.465715 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:50:41 crc kubenswrapper[4730]: I0131 16:50:41.465033 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:41 crc kubenswrapper[4730]: I0131 16:50:41.465886 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:41 crc kubenswrapper[4730]: I0131 16:50:41.466042 4730 scope.go:117] "RemoveContainer" containerID="d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070" Jan 31 16:50:41 crc kubenswrapper[4730]: I0131 16:50:41.466055 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:41 crc kubenswrapper[4730]: E0131 16:50:41.717703 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:42 crc kubenswrapper[4730]: I0131 16:50:42.541592 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595"} Jan 31 16:50:42 crc kubenswrapper[4730]: I0131 16:50:42.542967 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:42 crc kubenswrapper[4730]: I0131 16:50:42.543047 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:42 crc kubenswrapper[4730]: I0131 16:50:42.543164 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:42 crc kubenswrapper[4730]: E0131 16:50:42.543724 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:48 crc kubenswrapper[4730]: I0131 16:50:48.465659 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:50:48 crc kubenswrapper[4730]: I0131 16:50:48.466563 4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:50:49 crc kubenswrapper[4730]: I0131 16:50:49.619746 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809"} Jan 31 16:50:49 crc kubenswrapper[4730]: I0131 16:50:49.620749 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7"} Jan 31 16:50:49 crc kubenswrapper[4730]: I0131 16:50:49.621388 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:50:49 crc kubenswrapper[4730]: I0131 16:50:49.621452 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:50:50 crc kubenswrapper[4730]: I0131 16:50:50.634034 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" exitCode=1 Jan 31 16:50:50 crc kubenswrapper[4730]: I0131 16:50:50.634278 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809"} Jan 31 16:50:50 crc kubenswrapper[4730]: I0131 16:50:50.634525 
4730 scope.go:117] "RemoveContainer" containerID="565c7bd9106aad9d86ce94e5f961be95c0e35c7214bd841b8cd05f550145a58b" Jan 31 16:50:50 crc kubenswrapper[4730]: I0131 16:50:50.635495 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:50:50 crc kubenswrapper[4730]: E0131 16:50:50.635772 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:50:51 crc kubenswrapper[4730]: I0131 16:50:51.648640 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:50:51 crc kubenswrapper[4730]: E0131 16:50:51.649346 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:50:51 crc kubenswrapper[4730]: I0131 16:50:51.653212 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:50:52 crc kubenswrapper[4730]: I0131 16:50:52.658791 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:50:52 crc kubenswrapper[4730]: E0131 16:50:52.659514 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:50:54 crc kubenswrapper[4730]: I0131 16:50:54.478643 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:50:54 crc kubenswrapper[4730]: I0131 16:50:54.478726 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:50:54 crc kubenswrapper[4730]: I0131 16:50:54.478862 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:50:54 crc kubenswrapper[4730]: E0131 16:50:54.479250 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:50:54 crc kubenswrapper[4730]: I0131 
16:50:54.660774 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:50:55 crc kubenswrapper[4730]: I0131 16:50:55.660237 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:50:56 crc kubenswrapper[4730]: I0131 16:50:56.975324 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:50:56 crc kubenswrapper[4730]: I0131 16:50:56.975430 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:50:57 crc kubenswrapper[4730]: I0131 16:50:57.661300 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:51:00 crc kubenswrapper[4730]: I0131 16:51:00.658693 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:51:00 crc kubenswrapper[4730]: I0131 16:51:00.661351 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:51:00 crc kubenswrapper[4730]: I0131 16:51:00.661405 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:51:00 crc kubenswrapper[4730]: I0131 16:51:00.662723 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:51:00 crc kubenswrapper[4730]: I0131 16:51:00.662758 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:51:00 crc kubenswrapper[4730]: I0131 16:51:00.662837 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" gracePeriod=30 Jan 31 16:51:00 crc kubenswrapper[4730]: I0131 16:51:00.666255 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" 
podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:51:00 crc kubenswrapper[4730]: E0131 16:51:00.791599 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:51:01 crc kubenswrapper[4730]: I0131 16:51:01.753733 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" exitCode=0 Jan 31 16:51:01 crc kubenswrapper[4730]: I0131 16:51:01.753794 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7"} Jan 31 16:51:01 crc kubenswrapper[4730]: I0131 16:51:01.753884 4730 scope.go:117] "RemoveContainer" containerID="525d48ecc22cafce03d5c202b93966613fb4e59536345f3299c3c3aec9effd0f" Jan 31 16:51:01 crc kubenswrapper[4730]: I0131 16:51:01.754621 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:51:01 crc kubenswrapper[4730]: I0131 16:51:01.754682 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:51:01 crc kubenswrapper[4730]: E0131 16:51:01.755170 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:51:08 crc kubenswrapper[4730]: I0131 16:51:08.465979 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:51:08 crc kubenswrapper[4730]: I0131 16:51:08.466697 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:51:08 crc kubenswrapper[4730]: I0131 16:51:08.466905 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:51:08 crc kubenswrapper[4730]: E0131 16:51:08.467460 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:15 crc kubenswrapper[4730]: E0131 16:51:15.323792 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 16:51:15 crc kubenswrapper[4730]: I0131 16:51:15.933604 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:51:16 crc kubenswrapper[4730]: I0131 16:51:16.465152 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:51:16 crc kubenswrapper[4730]: I0131 16:51:16.465180 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:51:16 crc kubenswrapper[4730]: E0131 16:51:16.465438 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:51:17 crc kubenswrapper[4730]: I0131 16:51:17.004537 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:51:17 crc kubenswrapper[4730]: E0131 16:51:17.004673 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:51:17 crc kubenswrapper[4730]: E0131 16:51:17.005249 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:53:19.005232953 +0000 UTC m=+1385.811289869 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:51:20 crc kubenswrapper[4730]: I0131 16:51:20.465851 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:51:20 crc kubenswrapper[4730]: I0131 16:51:20.466436 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:51:20 crc kubenswrapper[4730]: I0131 16:51:20.466715 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:51:20 crc kubenswrapper[4730]: E0131 16:51:20.467562 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:26 crc kubenswrapper[4730]: I0131 16:51:26.974882 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:51:26 crc kubenswrapper[4730]: I0131 16:51:26.975641 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:51:26 crc kubenswrapper[4730]: I0131 16:51:26.975700 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:51:26 crc kubenswrapper[4730]: I0131 16:51:26.976241 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21bc1c0d1795b476dc0a7f952823b035db816e9829905fa6afc3669ea169eecc"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:51:26 crc kubenswrapper[4730]: I0131 16:51:26.976296 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://21bc1c0d1795b476dc0a7f952823b035db816e9829905fa6afc3669ea169eecc" gracePeriod=600 Jan 31 16:51:28 crc kubenswrapper[4730]: I0131 16:51:28.058917 4730 generic.go:334] "Generic (PLEG): container 
finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="21bc1c0d1795b476dc0a7f952823b035db816e9829905fa6afc3669ea169eecc" exitCode=0 Jan 31 16:51:28 crc kubenswrapper[4730]: I0131 16:51:28.059264 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"21bc1c0d1795b476dc0a7f952823b035db816e9829905fa6afc3669ea169eecc"} Jan 31 16:51:28 crc kubenswrapper[4730]: I0131 16:51:28.059298 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"43b7bb63726524ca697f41266f3bd99562b62d62470c4a1e15fd3ef35c3d68d2"} Jan 31 16:51:28 crc kubenswrapper[4730]: I0131 16:51:28.059317 4730 scope.go:117] "RemoveContainer" containerID="9edfe6ca891dac90613c7fe072627dce26dbef80751209cf3e40ccba97010f80" Jan 31 16:51:31 crc kubenswrapper[4730]: I0131 16:51:31.464334 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:51:31 crc kubenswrapper[4730]: I0131 16:51:31.464630 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:51:31 crc kubenswrapper[4730]: E0131 16:51:31.464957 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:51:32 crc kubenswrapper[4730]: I0131 16:51:32.465278 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:51:32 crc kubenswrapper[4730]: I0131 16:51:32.465870 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:51:32 crc kubenswrapper[4730]: I0131 16:51:32.466050 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:51:32 crc kubenswrapper[4730]: E0131 16:51:32.466691 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:34 crc kubenswrapper[4730]: I0131 16:51:34.139281 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" 
containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" exitCode=1 Jan 31 16:51:34 crc kubenswrapper[4730]: I0131 16:51:34.139357 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595"} Jan 31 16:51:34 crc kubenswrapper[4730]: I0131 16:51:34.139626 4730 scope.go:117] "RemoveContainer" containerID="d726a69f5e2dfff30e76809ee957e2e6becec83d862908ee3262df8ae2b25070" Jan 31 16:51:34 crc kubenswrapper[4730]: I0131 16:51:34.140999 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:51:34 crc kubenswrapper[4730]: I0131 16:51:34.141117 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:51:34 crc kubenswrapper[4730]: I0131 16:51:34.141267 4730 scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:51:34 crc kubenswrapper[4730]: I0131 16:51:34.141305 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:51:34 crc kubenswrapper[4730]: E0131 16:51:34.141985 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:39 crc kubenswrapper[4730]: I0131 16:51:39.186433 4730 scope.go:117] "RemoveContainer" containerID="3abc831078ca0909ba2a0cc107f5b02749686c97c3a76725bf9d5dd930b49582" Jan 31 16:51:39 crc kubenswrapper[4730]: I0131 16:51:39.231206 4730 scope.go:117] "RemoveContainer" containerID="4a7632f37124c859c197ad647098e2a83a4abbf9eda430770abc4c6188d37eeb" Jan 31 16:51:39 crc kubenswrapper[4730]: I0131 16:51:39.312702 4730 scope.go:117] "RemoveContainer" containerID="085f5a53443c1d1c759ba38149fcc00cd96b2894963f317fd1038c518df3cdc2" Jan 31 16:51:43 crc kubenswrapper[4730]: I0131 16:51:43.464119 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:51:43 crc kubenswrapper[4730]: I0131 16:51:43.464345 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:51:43 crc kubenswrapper[4730]: E0131 16:51:43.464689 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:51:48 crc kubenswrapper[4730]: I0131 16:51:48.466297 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:51:48 crc kubenswrapper[4730]: I0131 16:51:48.467257 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:51:48 crc kubenswrapper[4730]: I0131 16:51:48.467416 4730 scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:51:48 crc kubenswrapper[4730]: I0131 16:51:48.467431 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:51:49 crc kubenswrapper[4730]: E0131 16:51:49.111683 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.329320 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" exitCode=1 Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.329368 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071"} Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.329398 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49"} Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.329411 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc"} Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.329431 4730 scope.go:117] "RemoveContainer" containerID="fb410b8955870c9dab15f1e82a57cc6ce346ca63ff43ac1d271c2378938fbfcc" Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.330363 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.330458 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:51:49 crc kubenswrapper[4730]: I0131 16:51:49.330631 4730 scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:51:49 crc kubenswrapper[4730]: E0131 16:51:49.331173 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.346453 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" exitCode=1 Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.346484 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" exitCode=1 Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.346501 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071"} Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.346527 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49"} Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.346542 4730 scope.go:117] "RemoveContainer" containerID="c8738c7d84495fbb70559d36c44f34cdef9b216df00ef6438e73f3748fa17756" Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.347420 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.347494 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.347586 4730 scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.347598 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:51:50 crc kubenswrapper[4730]: E0131 16:51:50.348053 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:50 crc kubenswrapper[4730]: I0131 16:51:50.430783 4730 scope.go:117] "RemoveContainer" containerID="107141f107df56ad96215d954764ad78396bc8cee042cc8a7e0914f586042dfd" Jan 31 16:51:51 crc kubenswrapper[4730]: I0131 16:51:51.360158 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:51:51 crc kubenswrapper[4730]: I0131 16:51:51.360243 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:51:51 crc kubenswrapper[4730]: I0131 16:51:51.360349 4730 scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:51:51 crc kubenswrapper[4730]: I0131 16:51:51.360359 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:51:51 crc kubenswrapper[4730]: E0131 16:51:51.360733 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:51:57 crc kubenswrapper[4730]: I0131 16:51:57.463911 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:51:57 crc kubenswrapper[4730]: I0131 16:51:57.464449 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:51:57 crc kubenswrapper[4730]: E0131 16:51:57.464695 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:52:06 crc kubenswrapper[4730]: I0131 16:52:06.469579 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:52:06 crc kubenswrapper[4730]: I0131 16:52:06.470063 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:52:06 crc kubenswrapper[4730]: I0131 16:52:06.470138 4730 
scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:52:06 crc kubenswrapper[4730]: I0131 16:52:06.470146 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:52:06 crc kubenswrapper[4730]: E0131 16:52:06.470421 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:52:12 crc kubenswrapper[4730]: I0131 16:52:12.464610 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:52:12 crc kubenswrapper[4730]: I0131 16:52:12.465325 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:52:12 crc kubenswrapper[4730]: E0131 16:52:12.465718 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.464164 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.464818 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.464894 4730 scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.464901 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:52:19 crc kubenswrapper[4730]: E0131 16:52:19.663382 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.685347 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd"} Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.686313 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.686386 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:52:19 crc kubenswrapper[4730]: I0131 16:52:19.686502 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:52:19 crc kubenswrapper[4730]: E0131 16:52:19.686939 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:52:23 crc kubenswrapper[4730]: I0131 16:52:23.464429 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:52:23 crc kubenswrapper[4730]: I0131 16:52:23.464738 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:52:23 crc kubenswrapper[4730]: E0131 16:52:23.465078 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:52:33 crc kubenswrapper[4730]: I0131 16:52:33.464854 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:52:33 crc kubenswrapper[4730]: I0131 16:52:33.465467 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:52:33 crc kubenswrapper[4730]: I0131 16:52:33.465592 4730 scope.go:117] "RemoveContainer" 
containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:52:33 crc kubenswrapper[4730]: E0131 16:52:33.465979 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:52:35 crc kubenswrapper[4730]: I0131 16:52:35.464069 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:52:35 crc kubenswrapper[4730]: I0131 16:52:35.464112 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:52:35 crc kubenswrapper[4730]: E0131 16:52:35.464494 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:52:39 crc kubenswrapper[4730]: I0131 16:52:39.432687 4730 scope.go:117] "RemoveContainer" containerID="8ab434ed6c460a0441f280ba8e6c81a3b4d8478e9ee9f29f20f740e872a262ef" Jan 31 16:52:46 crc kubenswrapper[4730]: I0131 16:52:46.464627 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:52:46 crc kubenswrapper[4730]: I0131 16:52:46.465283 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:52:46 crc kubenswrapper[4730]: E0131 16:52:46.465711 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:52:47 crc kubenswrapper[4730]: I0131 16:52:47.464891 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:52:47 crc kubenswrapper[4730]: I0131 16:52:47.465369 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:52:47 crc kubenswrapper[4730]: I0131 16:52:47.465575 4730 scope.go:117] 
"RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:52:47 crc kubenswrapper[4730]: E0131 16:52:47.466146 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:52:58 crc kubenswrapper[4730]: I0131 16:52:58.464458 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:52:58 crc kubenswrapper[4730]: I0131 16:52:58.465036 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:52:58 crc kubenswrapper[4730]: E0131 16:52:58.465457 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:02 crc kubenswrapper[4730]: I0131 16:53:02.466181 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:53:02 crc kubenswrapper[4730]: I0131 16:53:02.466893 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:53:02 crc kubenswrapper[4730]: I0131 16:53:02.467091 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:53:02 crc kubenswrapper[4730]: E0131 16:53:02.467680 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:53:09 crc kubenswrapper[4730]: I0131 16:53:09.464937 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:53:09 crc kubenswrapper[4730]: I0131 
16:53:09.465430 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:53:09 crc kubenswrapper[4730]: E0131 16:53:09.465735 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:18 crc kubenswrapper[4730]: I0131 16:53:18.466724 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:53:18 crc kubenswrapper[4730]: I0131 16:53:18.467650 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:53:18 crc kubenswrapper[4730]: I0131 16:53:18.467869 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:53:18 crc kubenswrapper[4730]: E0131 16:53:18.468406 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:53:18 crc kubenswrapper[4730]: E0131 16:53:18.935006 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 16:53:19 crc kubenswrapper[4730]: I0131 16:53:19.046570 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:53:19 crc kubenswrapper[4730]: E0131 16:53:19.046746 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:53:19 crc kubenswrapper[4730]: E0131 16:53:19.046925 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:55:21.046869766 +0000 UTC m=+1507.852926722 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:53:19 crc kubenswrapper[4730]: I0131 16:53:19.857064 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:53:24 crc kubenswrapper[4730]: I0131 16:53:24.477078 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:53:24 crc kubenswrapper[4730]: I0131 16:53:24.478036 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:53:24 crc kubenswrapper[4730]: E0131 16:53:24.478843 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:30 crc kubenswrapper[4730]: I0131 16:53:30.464278 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:53:30 crc kubenswrapper[4730]: I0131 16:53:30.464764 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:53:30 crc kubenswrapper[4730]: I0131 16:53:30.464866 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:53:30 crc kubenswrapper[4730]: E0131 16:53:30.465114 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:53:38 crc kubenswrapper[4730]: I0131 16:53:38.465297 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:53:38 crc kubenswrapper[4730]: I0131 16:53:38.465943 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:53:38 crc kubenswrapper[4730]: E0131 16:53:38.674646 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" 
pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:39 crc kubenswrapper[4730]: I0131 16:53:39.059933 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383"} Jan 31 16:53:39 crc kubenswrapper[4730]: I0131 16:53:39.061118 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:53:39 crc kubenswrapper[4730]: I0131 16:53:39.061751 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:53:39 crc kubenswrapper[4730]: E0131 16:53:39.062134 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:40 crc kubenswrapper[4730]: I0131 16:53:40.076926 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" exitCode=1 Jan 31 16:53:40 crc kubenswrapper[4730]: I0131 16:53:40.077050 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383"} Jan 31 16:53:40 crc kubenswrapper[4730]: I0131 16:53:40.077390 4730 scope.go:117] "RemoveContainer" containerID="331e70e4b7859d0baf3a9f571cb438cdd2c942c230379c36403a42c39b1b5809" Jan 31 16:53:40 crc kubenswrapper[4730]: I0131 16:53:40.077932 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:53:40 crc kubenswrapper[4730]: I0131 16:53:40.077966 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:53:40 crc kubenswrapper[4730]: E0131 16:53:40.078553 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.100741 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd"} Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.101050 4730 scope.go:117] "RemoveContainer" containerID="e9d17bc84e5c33abac51bdcbf8cdd0b82908473492cf9c368c01f850003a6595" Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.100695 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" exitCode=1 Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.102585 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.102726 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.103262 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.103317 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:53:41 crc kubenswrapper[4730]: E0131 16:53:41.104064 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.105449 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:53:41 crc kubenswrapper[4730]: I0131 16:53:41.105476 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:53:41 crc kubenswrapper[4730]: E0131 16:53:41.344571 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:42 crc kubenswrapper[4730]: I0131 16:53:42.119704 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095"} Jan 31 16:53:42 crc kubenswrapper[4730]: I0131 16:53:42.120413 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:53:42 crc kubenswrapper[4730]: I0131 16:53:42.121094 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:53:42 crc kubenswrapper[4730]: E0131 16:53:42.121477 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:42 crc kubenswrapper[4730]: I0131 16:53:42.653927 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:53:43 crc kubenswrapper[4730]: I0131 16:53:43.139281 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:53:43 crc kubenswrapper[4730]: E0131 16:53:43.139492 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:44 crc kubenswrapper[4730]: I0131 16:53:44.147830 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:53:44 crc kubenswrapper[4730]: E0131 16:53:44.148547 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.167120 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="7141b3c96e8593876e504fdd0590a5d814ff71c190eba021e3cd88de170efd1f" exitCode=1 Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.167254 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"7141b3c96e8593876e504fdd0590a5d814ff71c190eba021e3cd88de170efd1f"} Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.167329 4730 scope.go:117] "RemoveContainer" containerID="57f18dcfb7530a415b40c27dcda7694fcabb603d09c2b77a985646d961881789" Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.168892 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.169044 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.169107 4730 scope.go:117] "RemoveContainer" containerID="7141b3c96e8593876e504fdd0590a5d814ff71c190eba021e3cd88de170efd1f" Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.169267 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:53:45 crc kubenswrapper[4730]: I0131 16:53:45.169303 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:53:45 crc kubenswrapper[4730]: E0131 16:53:45.170428 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:53:48 crc kubenswrapper[4730]: I0131 16:53:48.664034 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:53:50 crc kubenswrapper[4730]: I0131 16:53:50.662264 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:53:51 crc kubenswrapper[4730]: I0131 16:53:51.657789 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:53:54 crc kubenswrapper[4730]: I0131 16:53:54.659599 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:53:54 crc kubenswrapper[4730]: I0131 16:53:54.660105 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:53:54 crc kubenswrapper[4730]: I0131 16:53:54.660852 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:53:54 crc kubenswrapper[4730]: I0131 16:53:54.660878 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:53:54 crc kubenswrapper[4730]: I0131 16:53:54.660916 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" gracePeriod=30 Jan 31 16:53:54 crc kubenswrapper[4730]: I0131 16:53:54.665787 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:53:54 crc kubenswrapper[4730]: E0131 16:53:54.784583 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:55 crc kubenswrapper[4730]: I0131 16:53:55.284841 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" exitCode=0 Jan 31 16:53:55 crc kubenswrapper[4730]: I0131 16:53:55.284943 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095"} Jan 31 16:53:55 crc kubenswrapper[4730]: I0131 16:53:55.285206 4730 scope.go:117] "RemoveContainer" containerID="11e9dcdaf868034d0593102b22c10a223d16d0aae7f600557479ddb7bddd94b7" Jan 31 16:53:55 crc kubenswrapper[4730]: I0131 16:53:55.285894 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:53:55 crc kubenswrapper[4730]: I0131 16:53:55.285930 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:53:55 crc kubenswrapper[4730]: E0131 16:53:55.286257 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:53:56 crc kubenswrapper[4730]: I0131 16:53:56.975031 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:53:56 crc kubenswrapper[4730]: I0131 16:53:56.975500 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:54:00 crc kubenswrapper[4730]: I0131 16:54:00.465264 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:54:00 crc kubenswrapper[4730]: I0131 16:54:00.465706 4730 scope.go:117] "RemoveContainer" 
containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:54:00 crc kubenswrapper[4730]: I0131 16:54:00.465753 4730 scope.go:117] "RemoveContainer" containerID="7141b3c96e8593876e504fdd0590a5d814ff71c190eba021e3cd88de170efd1f" Jan 31 16:54:00 crc kubenswrapper[4730]: I0131 16:54:00.465883 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:54:00 crc kubenswrapper[4730]: I0131 16:54:00.465899 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:54:00 crc kubenswrapper[4730]: E0131 16:54:00.466778 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:54:07 crc kubenswrapper[4730]: I0131 16:54:07.464232 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:54:07 crc kubenswrapper[4730]: I0131 16:54:07.464715 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:54:07 crc kubenswrapper[4730]: E0131 16:54:07.465019 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:54:11 crc kubenswrapper[4730]: I0131 16:54:11.464460 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:54:11 crc kubenswrapper[4730]: I0131 16:54:11.464915 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:54:11 crc kubenswrapper[4730]: I0131 16:54:11.464949 4730 scope.go:117] "RemoveContainer" containerID="7141b3c96e8593876e504fdd0590a5d814ff71c190eba021e3cd88de170efd1f" Jan 31 16:54:11 crc kubenswrapper[4730]: I0131 16:54:11.465009 4730 scope.go:117] "RemoveContainer" 
containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:54:11 crc kubenswrapper[4730]: I0131 16:54:11.465018 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:54:11 crc kubenswrapper[4730]: E0131 16:54:11.680222 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:54:12 crc kubenswrapper[4730]: I0131 16:54:12.495226 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9"} Jan 31 16:54:12 crc kubenswrapper[4730]: I0131 16:54:12.496149 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:54:12 crc kubenswrapper[4730]: I0131 16:54:12.496204 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:54:12 crc kubenswrapper[4730]: I0131 16:54:12.496305 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:54:12 crc kubenswrapper[4730]: I0131 16:54:12.496314 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:54:12 crc kubenswrapper[4730]: E0131 16:54:12.496581 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:54:20 crc kubenswrapper[4730]: I0131 16:54:20.464959 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 
16:54:20 crc kubenswrapper[4730]: I0131 16:54:20.465587 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:54:20 crc kubenswrapper[4730]: E0131 16:54:20.465881 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:54:26 crc kubenswrapper[4730]: I0131 16:54:26.974999 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:54:26 crc kubenswrapper[4730]: I0131 16:54:26.975635 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:54:27 crc kubenswrapper[4730]: I0131 16:54:27.464397 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:54:27 crc kubenswrapper[4730]: I0131 16:54:27.464464 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:54:27 crc kubenswrapper[4730]: I0131 16:54:27.464538 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:54:27 crc kubenswrapper[4730]: I0131 16:54:27.464546 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:54:27 crc kubenswrapper[4730]: E0131 16:54:27.464856 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:54:35 crc kubenswrapper[4730]: I0131 16:54:35.464240 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:54:35 crc kubenswrapper[4730]: I0131 
16:54:35.464756 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:54:35 crc kubenswrapper[4730]: E0131 16:54:35.465008 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:54:42 crc kubenswrapper[4730]: I0131 16:54:42.465305 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:54:42 crc kubenswrapper[4730]: I0131 16:54:42.466003 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:54:42 crc kubenswrapper[4730]: I0131 16:54:42.466097 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:54:42 crc kubenswrapper[4730]: I0131 16:54:42.466110 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:54:42 crc kubenswrapper[4730]: E0131 16:54:42.466434 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:54:48 crc kubenswrapper[4730]: I0131 16:54:48.466334 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:54:48 crc kubenswrapper[4730]: I0131 16:54:48.466773 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:54:48 crc kubenswrapper[4730]: E0131 16:54:48.467041 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:54:56 
crc kubenswrapper[4730]: I0131 16:54:56.974897 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:54:56 crc kubenswrapper[4730]: I0131 16:54:56.975422 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:54:56 crc kubenswrapper[4730]: I0131 16:54:56.975469 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:54:56 crc kubenswrapper[4730]: I0131 16:54:56.976311 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43b7bb63726524ca697f41266f3bd99562b62d62470c4a1e15fd3ef35c3d68d2"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:54:56 crc kubenswrapper[4730]: I0131 16:54:56.976378 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://43b7bb63726524ca697f41266f3bd99562b62d62470c4a1e15fd3ef35c3d68d2" gracePeriod=600 Jan 31 16:54:57 crc kubenswrapper[4730]: I0131 16:54:57.465523 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:54:57 crc kubenswrapper[4730]: I0131 16:54:57.465846 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:54:57 crc kubenswrapper[4730]: I0131 16:54:57.465938 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:54:57 crc kubenswrapper[4730]: I0131 16:54:57.465947 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:54:57 crc kubenswrapper[4730]: E0131 16:54:57.466482 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:54:57 crc 
kubenswrapper[4730]: I0131 16:54:57.982506 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="43b7bb63726524ca697f41266f3bd99562b62d62470c4a1e15fd3ef35c3d68d2" exitCode=0 Jan 31 16:54:57 crc kubenswrapper[4730]: I0131 16:54:57.982579 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"43b7bb63726524ca697f41266f3bd99562b62d62470c4a1e15fd3ef35c3d68d2"} Jan 31 16:54:57 crc kubenswrapper[4730]: I0131 16:54:57.982846 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d"} Jan 31 16:54:57 crc kubenswrapper[4730]: I0131 16:54:57.982863 4730 scope.go:117] "RemoveContainer" containerID="21bc1c0d1795b476dc0a7f952823b035db816e9829905fa6afc3669ea169eecc" Jan 31 16:54:59 crc kubenswrapper[4730]: I0131 16:54:59.466831 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:54:59 crc kubenswrapper[4730]: I0131 16:54:59.467206 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:54:59 crc kubenswrapper[4730]: E0131 16:54:59.467705 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:55:10 crc kubenswrapper[4730]: I0131 16:55:10.465682 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:55:10 crc kubenswrapper[4730]: I0131 16:55:10.466191 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:55:10 crc kubenswrapper[4730]: I0131 16:55:10.466266 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:55:10 crc kubenswrapper[4730]: I0131 16:55:10.466273 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:55:10 crc kubenswrapper[4730]: E0131 16:55:10.694852 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:55:11 crc kubenswrapper[4730]: I0131 16:55:11.108931 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563"} Jan 31 16:55:11 crc kubenswrapper[4730]: I0131 16:55:11.110027 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:55:11 crc kubenswrapper[4730]: I0131 16:55:11.110147 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:55:11 crc kubenswrapper[4730]: I0131 16:55:11.110338 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:55:11 crc kubenswrapper[4730]: E0131 16:55:11.110935 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:55:12 crc kubenswrapper[4730]: I0131 16:55:12.464205 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:55:12 crc kubenswrapper[4730]: I0131 16:55:12.465481 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:55:12 crc kubenswrapper[4730]: E0131 16:55:12.465937 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:55:21 crc kubenswrapper[4730]: I0131 16:55:21.050098 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:55:21 crc kubenswrapper[4730]: E0131 16:55:21.050735 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:55:21 crc kubenswrapper[4730]: E0131 16:55:21.051258 4730 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:57:23.051229065 +0000 UTC m=+1629.857286021 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:55:21 crc kubenswrapper[4730]: I0131 16:55:21.059202 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-b7vmm"] Jan 31 16:55:21 crc kubenswrapper[4730]: I0131 16:55:21.068403 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-b7vmm"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.078737 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d8b-account-create-update-5d7p8"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.106494 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-sjqqh"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.120454 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a81a-account-create-update-482zr"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.130561 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1d8b-account-create-update-5d7p8"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.138383 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v5qvf"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.146474 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d7a2-account-create-update-9gqnx"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.153876 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-sjqqh"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.160703 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a81a-account-create-update-482zr"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.166920 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v5qvf"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.173732 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d7a2-account-create-update-9gqnx"] Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.472968 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2964865c-12e5-4d18-bd62-16629f4a1090" path="/var/lib/kubelet/pods/2964865c-12e5-4d18-bd62-16629f4a1090/volumes" Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.473670 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31af8919-7a56-4384-9ee9-edf256738e2d" path="/var/lib/kubelet/pods/31af8919-7a56-4384-9ee9-edf256738e2d/volumes" Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.474198 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c2561b-5ed5-4508-b5a9-b4179c91ac72" path="/var/lib/kubelet/pods/45c2561b-5ed5-4508-b5a9-b4179c91ac72/volumes" Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.474720 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d28b8af-a349-44fa-8e46-ec5c26389dff" 
path="/var/lib/kubelet/pods/8d28b8af-a349-44fa-8e46-ec5c26389dff/volumes" Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.475851 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c1ddc8-93ef-4228-aa5b-05989e77b3ac" path="/var/lib/kubelet/pods/b5c1ddc8-93ef-4228-aa5b-05989e77b3ac/volumes" Jan 31 16:55:22 crc kubenswrapper[4730]: I0131 16:55:22.476384 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5dc6e44-d1e4-4d5e-a83e-f2223e70f013" path="/var/lib/kubelet/pods/b5dc6e44-d1e4-4d5e-a83e-f2223e70f013/volumes" Jan 31 16:55:22 crc kubenswrapper[4730]: E0131 16:55:22.858570 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 16:55:23 crc kubenswrapper[4730]: I0131 16:55:23.229742 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:55:23 crc kubenswrapper[4730]: I0131 16:55:23.465755 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:55:23 crc kubenswrapper[4730]: I0131 16:55:23.466268 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:55:23 crc kubenswrapper[4730]: I0131 16:55:23.466644 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:55:23 crc kubenswrapper[4730]: E0131 16:55:23.467364 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:55:26 crc kubenswrapper[4730]: I0131 16:55:26.464922 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:55:26 crc kubenswrapper[4730]: I0131 16:55:26.465570 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:55:26 crc kubenswrapper[4730]: E0131 16:55:26.466020 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.116490 4730 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c9pcz"] Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.118718 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.148898 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9pcz"] Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.296130 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-catalog-content\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.296222 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-utilities\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.296282 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffs8b\" (UniqueName: \"kubernetes.io/projected/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-kube-api-access-ffs8b\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.398510 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-catalog-content\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.398567 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-utilities\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.398611 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffs8b\" (UniqueName: \"kubernetes.io/projected/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-kube-api-access-ffs8b\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.399341 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-catalog-content\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.399594 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-utilities\") pod \"community-operators-c9pcz\" 
(UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.422637 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffs8b\" (UniqueName: \"kubernetes.io/projected/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-kube-api-access-ffs8b\") pod \"community-operators-c9pcz\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:32 crc kubenswrapper[4730]: I0131 16:55:32.438353 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:33 crc kubenswrapper[4730]: I0131 16:55:33.398048 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c9pcz"] Jan 31 16:55:34 crc kubenswrapper[4730]: I0131 16:55:34.354658 4730 generic.go:334] "Generic (PLEG): container finished" podID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerID="382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301" exitCode=0 Jan 31 16:55:34 crc kubenswrapper[4730]: I0131 16:55:34.354754 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9pcz" event={"ID":"33282a36-e1bc-4220-b2e1-8d20b65c3bd0","Type":"ContainerDied","Data":"382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301"} Jan 31 16:55:34 crc kubenswrapper[4730]: I0131 16:55:34.354951 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9pcz" event={"ID":"33282a36-e1bc-4220-b2e1-8d20b65c3bd0","Type":"ContainerStarted","Data":"c50e5739f0bed2d59ef8e642963733f5f9d140197ad747be3dbda190b39fd0a5"} Jan 31 16:55:34 crc kubenswrapper[4730]: I0131 16:55:34.357897 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 16:55:35 crc kubenswrapper[4730]: I0131 16:55:35.367135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9pcz" event={"ID":"33282a36-e1bc-4220-b2e1-8d20b65c3bd0","Type":"ContainerStarted","Data":"4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd"} Jan 31 16:55:35 crc kubenswrapper[4730]: I0131 16:55:35.466250 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:55:35 crc kubenswrapper[4730]: I0131 16:55:35.466398 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:55:35 crc kubenswrapper[4730]: I0131 16:55:35.466607 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:55:35 crc kubenswrapper[4730]: E0131 16:55:35.467332 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:55:37 crc kubenswrapper[4730]: I0131 16:55:37.389003 4730 generic.go:334] "Generic (PLEG): container finished" podID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerID="4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd" exitCode=0 Jan 31 16:55:37 crc kubenswrapper[4730]: I0131 16:55:37.389065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9pcz" event={"ID":"33282a36-e1bc-4220-b2e1-8d20b65c3bd0","Type":"ContainerDied","Data":"4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd"} Jan 31 16:55:38 crc kubenswrapper[4730]: I0131 16:55:38.402627 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9pcz" event={"ID":"33282a36-e1bc-4220-b2e1-8d20b65c3bd0","Type":"ContainerStarted","Data":"dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0"} Jan 31 16:55:38 crc kubenswrapper[4730]: I0131 16:55:38.433514 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c9pcz" podStartSLOduration=2.983004495 podStartE2EDuration="6.43349438s" podCreationTimestamp="2026-01-31 16:55:32 +0000 UTC" firstStartedPulling="2026-01-31 16:55:34.357597887 +0000 UTC m=+1521.163654813" lastFinishedPulling="2026-01-31 16:55:37.808087732 +0000 UTC m=+1524.614144698" observedRunningTime="2026-01-31 16:55:38.427689989 +0000 UTC m=+1525.233746955" watchObservedRunningTime="2026-01-31 16:55:38.43349438 +0000 UTC m=+1525.239551316" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.589307 4730 scope.go:117] "RemoveContainer" containerID="21c4985406e9b3864e245ea03fb6ba6e3887ad59b19bf9cb146fdb5156ad45eb" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.628074 4730 scope.go:117] "RemoveContainer" containerID="1b1f974a4da052be1b62137faad994d34d2bd00606ed02f18aeb3589a9d62b78" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.658118 4730 scope.go:117] "RemoveContainer" containerID="c97089a23a6c29990274e3123b803082b780944b17217c01debf09eebc67230d" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.693047 4730 scope.go:117] "RemoveContainer" containerID="4bf47bf5d412ac417c8e5e5795018bddf82c37a4882326f8403dcd690825a72b" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.714066 4730 scope.go:117] "RemoveContainer" containerID="ebbe0b53b96b9b99998df66a005a4b19c3b7c2936a4b35225f5ecf872890775e" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.750951 4730 scope.go:117] "RemoveContainer" containerID="130fc790319cb61672dbcf7fc52cf14bcfced6c2addeafce6e91ee87e759514c" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.772104 4730 scope.go:117] "RemoveContainer" containerID="9773ec7d9f8a0b05b588227024fdecbef35b171eef74fcce48fb674c87c0e0b8" Jan 31 16:55:39 crc kubenswrapper[4730]: I0131 16:55:39.811442 4730 scope.go:117] "RemoveContainer" containerID="68b0d561dbc914741e6f1e7c54792963052193dfea00eb9eb40b4b446131d9b1" Jan 31 16:55:40 crc kubenswrapper[4730]: I0131 16:55:40.464150 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:55:40 crc kubenswrapper[4730]: I0131 16:55:40.464465 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:55:40 crc kubenswrapper[4730]: E0131 16:55:40.464916 4730 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:55:42 crc kubenswrapper[4730]: I0131 16:55:42.438843 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:42 crc kubenswrapper[4730]: I0131 16:55:42.439283 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:43 crc kubenswrapper[4730]: I0131 16:55:43.500525 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c9pcz" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="registry-server" probeResult="failure" output=< Jan 31 16:55:43 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:55:43 crc kubenswrapper[4730]: > Jan 31 16:55:44 crc kubenswrapper[4730]: I0131 16:55:44.074231 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2b0d-account-create-update-rh8s6"] Jan 31 16:55:44 crc kubenswrapper[4730]: I0131 16:55:44.087842 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1dce-account-create-update-2crhq"] Jan 31 16:55:44 crc kubenswrapper[4730]: I0131 16:55:44.099042 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2b0d-account-create-update-rh8s6"] Jan 31 16:55:44 crc kubenswrapper[4730]: I0131 16:55:44.108322 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1dce-account-create-update-2crhq"] Jan 31 16:55:44 crc kubenswrapper[4730]: I0131 16:55:44.476597 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2111311f-b72a-4c59-84a4-4c97bfa06105" path="/var/lib/kubelet/pods/2111311f-b72a-4c59-84a4-4c97bfa06105/volumes" Jan 31 16:55:44 crc kubenswrapper[4730]: I0131 16:55:44.477537 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b163a61-8109-4989-ada6-8e408c05448d" path="/var/lib/kubelet/pods/6b163a61-8109-4989-ada6-8e408c05448d/volumes" Jan 31 16:55:46 crc kubenswrapper[4730]: I0131 16:55:46.466317 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:55:46 crc kubenswrapper[4730]: I0131 16:55:46.466581 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:55:46 crc kubenswrapper[4730]: I0131 16:55:46.466668 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:55:46 crc kubenswrapper[4730]: E0131 16:55:46.466925 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:55:47 crc kubenswrapper[4730]: I0131 16:55:47.032608 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-l4plp"] Jan 31 16:55:47 crc kubenswrapper[4730]: I0131 16:55:47.043583 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lv6bn"] Jan 31 16:55:47 crc kubenswrapper[4730]: I0131 16:55:47.054173 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-l4plp"] Jan 31 16:55:47 crc kubenswrapper[4730]: I0131 16:55:47.062378 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lv6bn"] Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.043911 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-547f-account-create-update-4pbzk"] Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.060930 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-v46sw"] Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.076521 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ltdm6"] Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.084852 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ltdm6"] Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.091683 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-547f-account-create-update-4pbzk"] Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.098454 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-v46sw"] Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.481358 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3c71a4-f2bf-46da-9c7d-c7c4dba19585" path="/var/lib/kubelet/pods/1c3c71a4-f2bf-46da-9c7d-c7c4dba19585/volumes" Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.482861 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48dba275-7242-434b-b55e-1c62a25c7c1a" path="/var/lib/kubelet/pods/48dba275-7242-434b-b55e-1c62a25c7c1a/volumes" Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.483980 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892cfb30-014c-4cdf-8822-dbcbe7dea46c" path="/var/lib/kubelet/pods/892cfb30-014c-4cdf-8822-dbcbe7dea46c/volumes" Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.485096 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8fb99c8-b28a-450a-8692-e585216fbc53" path="/var/lib/kubelet/pods/c8fb99c8-b28a-450a-8692-e585216fbc53/volumes" Jan 31 16:55:48 crc kubenswrapper[4730]: I0131 16:55:48.486331 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f2bffc-75d1-4da3-be48-728edaf3e0be" path="/var/lib/kubelet/pods/d9f2bffc-75d1-4da3-be48-728edaf3e0be/volumes" Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.046053 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5vxrp"] Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.058151 4730 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-db-sync-5vxrp"] Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.464428 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.464455 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:55:52 crc kubenswrapper[4730]: E0131 16:55:52.464696 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.474331 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="627cf9cc-1e11-455d-b186-f159d4eed39c" path="/var/lib/kubelet/pods/627cf9cc-1e11-455d-b186-f159d4eed39c/volumes" Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.489214 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.534153 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:52 crc kubenswrapper[4730]: I0131 16:55:52.728868 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c9pcz"] Jan 31 16:55:53 crc kubenswrapper[4730]: I0131 16:55:53.041591 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4dcfm"] Jan 31 16:55:53 crc kubenswrapper[4730]: I0131 16:55:53.047948 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4dcfm"] Jan 31 16:55:53 crc kubenswrapper[4730]: I0131 16:55:53.534415 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c9pcz" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="registry-server" containerID="cri-o://dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0" gracePeriod=2 Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.102285 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.265055 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffs8b\" (UniqueName: \"kubernetes.io/projected/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-kube-api-access-ffs8b\") pod \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.265169 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-utilities\") pod \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.265202 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-catalog-content\") pod \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\" (UID: \"33282a36-e1bc-4220-b2e1-8d20b65c3bd0\") " Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.266606 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-utilities" (OuterVolumeSpecName: "utilities") pod "33282a36-e1bc-4220-b2e1-8d20b65c3bd0" (UID: "33282a36-e1bc-4220-b2e1-8d20b65c3bd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.273968 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-kube-api-access-ffs8b" (OuterVolumeSpecName: "kube-api-access-ffs8b") pod "33282a36-e1bc-4220-b2e1-8d20b65c3bd0" (UID: "33282a36-e1bc-4220-b2e1-8d20b65c3bd0"). InnerVolumeSpecName "kube-api-access-ffs8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.312346 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33282a36-e1bc-4220-b2e1-8d20b65c3bd0" (UID: "33282a36-e1bc-4220-b2e1-8d20b65c3bd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.367187 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffs8b\" (UniqueName: \"kubernetes.io/projected/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-kube-api-access-ffs8b\") on node \"crc\" DevicePath \"\"" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.367212 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.367222 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33282a36-e1bc-4220-b2e1-8d20b65c3bd0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.475555 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97354e1f-e4b3-4f45-a9f6-58d1932e9f45" path="/var/lib/kubelet/pods/97354e1f-e4b3-4f45-a9f6-58d1932e9f45/volumes" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.544856 4730 generic.go:334] "Generic (PLEG): container finished" podID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerID="dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0" exitCode=0 Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.544915 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9pcz" event={"ID":"33282a36-e1bc-4220-b2e1-8d20b65c3bd0","Type":"ContainerDied","Data":"dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0"} Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.544958 4730 scope.go:117] "RemoveContainer" containerID="dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.544957 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c9pcz" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.545253 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c9pcz" event={"ID":"33282a36-e1bc-4220-b2e1-8d20b65c3bd0","Type":"ContainerDied","Data":"c50e5739f0bed2d59ef8e642963733f5f9d140197ad747be3dbda190b39fd0a5"} Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.581246 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c9pcz"] Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.589179 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c9pcz"] Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.592274 4730 scope.go:117] "RemoveContainer" containerID="4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.617334 4730 scope.go:117] "RemoveContainer" containerID="382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.675271 4730 scope.go:117] "RemoveContainer" containerID="dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0" Jan 31 16:55:54 crc kubenswrapper[4730]: E0131 16:55:54.675738 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0\": container with ID starting with dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0 not found: ID does not exist" containerID="dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.675771 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0"} err="failed to get container status \"dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0\": rpc error: code = NotFound desc = could not find container \"dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0\": container with ID starting with dc9384a00ec3b6e2e7a8b09438a7ad6530d11ba65be7425a4d80efa03a1b4ad0 not found: ID does not exist" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.675792 4730 scope.go:117] "RemoveContainer" containerID="4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd" Jan 31 16:55:54 crc kubenswrapper[4730]: E0131 16:55:54.676176 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd\": container with ID starting with 4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd not found: ID does not exist" containerID="4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.676216 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd"} err="failed to get container status \"4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd\": rpc error: code = NotFound desc = could not find container \"4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd\": container with ID starting with 4267008d2c5e9902722486355964fd0c5d5a2450171ef1df094a81f89e699dfd not 
found: ID does not exist" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.676245 4730 scope.go:117] "RemoveContainer" containerID="382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301" Jan 31 16:55:54 crc kubenswrapper[4730]: E0131 16:55:54.676507 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301\": container with ID starting with 382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301 not found: ID does not exist" containerID="382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301" Jan 31 16:55:54 crc kubenswrapper[4730]: I0131 16:55:54.676526 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301"} err="failed to get container status \"382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301\": rpc error: code = NotFound desc = could not find container \"382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301\": container with ID starting with 382b9f14c7a0f40bd5ab7a473dd1012299faaebd49d25d49bed64e7488c81301 not found: ID does not exist" Jan 31 16:55:56 crc kubenswrapper[4730]: I0131 16:55:56.476092 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" path="/var/lib/kubelet/pods/33282a36-e1bc-4220-b2e1-8d20b65c3bd0/volumes" Jan 31 16:56:00 crc kubenswrapper[4730]: I0131 16:56:00.464704 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:56:00 crc kubenswrapper[4730]: I0131 16:56:00.465385 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:56:00 crc kubenswrapper[4730]: I0131 16:56:00.465507 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:56:00 crc kubenswrapper[4730]: E0131 16:56:00.465868 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:56:06 crc kubenswrapper[4730]: I0131 16:56:06.465649 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:56:06 crc kubenswrapper[4730]: I0131 16:56:06.466579 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:56:06 crc kubenswrapper[4730]: E0131 16:56:06.466880 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:56:09 crc kubenswrapper[4730]: I0131 16:56:09.708403 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9" exitCode=1 Jan 31 16:56:09 crc kubenswrapper[4730]: I0131 16:56:09.708462 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9"} Jan 31 16:56:09 crc kubenswrapper[4730]: I0131 16:56:09.709696 4730 scope.go:117] "RemoveContainer" containerID="7141b3c96e8593876e504fdd0590a5d814ff71c190eba021e3cd88de170efd1f" Jan 31 16:56:09 crc kubenswrapper[4730]: I0131 16:56:09.711470 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:56:09 crc kubenswrapper[4730]: I0131 16:56:09.711619 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:56:09 crc kubenswrapper[4730]: I0131 16:56:09.711675 4730 scope.go:117] "RemoveContainer" containerID="1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9" Jan 31 16:56:09 crc kubenswrapper[4730]: I0131 16:56:09.711937 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:56:09 crc kubenswrapper[4730]: E0131 16:56:09.712637 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:56:19 crc kubenswrapper[4730]: I0131 16:56:19.464321 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:56:19 crc kubenswrapper[4730]: I0131 16:56:19.464892 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:56:19 crc kubenswrapper[4730]: E0131 16:56:19.465246 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:56:23 crc kubenswrapper[4730]: I0131 16:56:23.465094 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:56:23 crc kubenswrapper[4730]: I0131 16:56:23.465673 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:56:23 crc kubenswrapper[4730]: I0131 16:56:23.465706 4730 scope.go:117] "RemoveContainer" containerID="1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9" Jan 31 16:56:23 crc kubenswrapper[4730]: I0131 16:56:23.465781 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:56:23 crc kubenswrapper[4730]: E0131 16:56:23.466153 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:56:32 crc kubenswrapper[4730]: I0131 16:56:32.465182 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:56:32 crc kubenswrapper[4730]: I0131 16:56:32.465686 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:56:32 crc kubenswrapper[4730]: E0131 16:56:32.465943 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.075947 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rw222"] Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.088267 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rw222"] Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.789349 4730 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kjqb6"] Jan 31 16:56:33 crc kubenswrapper[4730]: E0131 16:56:33.789773 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="extract-utilities" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.789791 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="extract-utilities" Jan 31 16:56:33 crc kubenswrapper[4730]: E0131 16:56:33.789839 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="extract-content" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.789847 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="extract-content" Jan 31 16:56:33 crc kubenswrapper[4730]: E0131 16:56:33.789881 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="registry-server" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.789894 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="registry-server" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.790159 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="33282a36-e1bc-4220-b2e1-8d20b65c3bd0" containerName="registry-server" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.791700 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.799483 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjqb6"] Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.940461 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-467xm\" (UniqueName: \"kubernetes.io/projected/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-kube-api-access-467xm\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.940505 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-utilities\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:33 crc kubenswrapper[4730]: I0131 16:56:33.940548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-catalog-content\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.043403 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-467xm\" (UniqueName: \"kubernetes.io/projected/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-kube-api-access-467xm\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 
16:56:34.043458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-utilities\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.043507 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-catalog-content\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.044058 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-utilities\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.044120 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-catalog-content\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.066863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-467xm\" (UniqueName: \"kubernetes.io/projected/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-kube-api-access-467xm\") pod \"redhat-marketplace-kjqb6\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.112023 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.472351 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.472681 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.472704 4730 scope.go:117] "RemoveContainer" containerID="1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.472763 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:56:34 crc kubenswrapper[4730]: E0131 16:56:34.473118 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.477989 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf9dbf3-9160-439f-96d0-4437019ae012" path="/var/lib/kubelet/pods/7cf9dbf3-9160-439f-96d0-4437019ae012/volumes" Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.567077 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjqb6"] Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.959609 4730 generic.go:334] "Generic (PLEG): container finished" podID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerID="8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d" exitCode=0 Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.959662 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjqb6" event={"ID":"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a","Type":"ContainerDied","Data":"8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d"} Jan 31 16:56:34 crc kubenswrapper[4730]: I0131 16:56:34.959691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjqb6" event={"ID":"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a","Type":"ContainerStarted","Data":"2ffcef2cf6b6db1d1cf7fbd9c8c94c4fec6eb3e4d3114f1be5669600dadbb9f2"} Jan 31 16:56:36 crc kubenswrapper[4730]: I0131 16:56:36.978566 4730 generic.go:334] "Generic (PLEG): container finished" podID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerID="3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19" exitCode=0 Jan 31 16:56:36 crc kubenswrapper[4730]: I0131 16:56:36.978821 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kjqb6" event={"ID":"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a","Type":"ContainerDied","Data":"3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19"} Jan 31 16:56:37 crc kubenswrapper[4730]: I0131 16:56:37.993161 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjqb6" event={"ID":"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a","Type":"ContainerStarted","Data":"f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c"} Jan 31 16:56:38 crc kubenswrapper[4730]: I0131 16:56:38.026747 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kjqb6" podStartSLOduration=2.560504733 podStartE2EDuration="5.026727933s" podCreationTimestamp="2026-01-31 16:56:33 +0000 UTC" firstStartedPulling="2026-01-31 16:56:34.962987809 +0000 UTC m=+1581.769044725" lastFinishedPulling="2026-01-31 16:56:37.429210999 +0000 UTC m=+1584.235267925" observedRunningTime="2026-01-31 16:56:38.017889308 +0000 UTC m=+1584.823946244" watchObservedRunningTime="2026-01-31 16:56:38.026727933 +0000 UTC m=+1584.832784859" Jan 31 16:56:39 crc kubenswrapper[4730]: I0131 16:56:39.982785 4730 scope.go:117] "RemoveContainer" containerID="784173489c1fdf0e74003f738ded67dfae8a15956196405075bf03ccdfd982d3" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.046191 4730 scope.go:117] "RemoveContainer" containerID="419be2ea72d4aaae301c506a03356a157440daa24c27e7cc315f008cb5342da8" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.081159 4730 scope.go:117] "RemoveContainer" containerID="895cab6b16eb7a353f8c1bee26fe81294ee5929f5fd129be54f1b3481abf3bd9" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.148728 4730 scope.go:117] "RemoveContainer" containerID="6ad0c710e6c3c5af532d6d645f91243c6d5988b7372da3e6280a2db905129930" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.177546 4730 scope.go:117] "RemoveContainer" containerID="082494acdd299598f4b5087889203890c64e231997812e4cd45b3a029662d476" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.227444 4730 scope.go:117] "RemoveContainer" containerID="9062dbede20d796ce638c375a0c9eb1a4f176849690456f200fcdb19c576f593" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.270076 4730 scope.go:117] "RemoveContainer" containerID="4f07dcc150b2023774fde7bb4915ca967d8b8644c88104b9125b3fac66c92813" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.307312 4730 scope.go:117] "RemoveContainer" containerID="4e6f6b95da70e5c197514d2d0a23e4491b78add6b0e8c9997c68fca337e92683" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.343431 4730 scope.go:117] "RemoveContainer" containerID="415c363beca5282f0080d3caf153f786edadf0d8213ad3a4683cf2a16c0bce64" Jan 31 16:56:40 crc kubenswrapper[4730]: I0131 16:56:40.373709 4730 scope.go:117] "RemoveContainer" containerID="3f4d0a9e999cf1d51a29a8d9e0aab5d18604850cecc4322a909b0ebe4fdeb3ad" Jan 31 16:56:44 crc kubenswrapper[4730]: I0131 16:56:44.112260 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:44 crc kubenswrapper[4730]: I0131 16:56:44.113538 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:44 crc kubenswrapper[4730]: I0131 16:56:44.197559 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:45 crc 
kubenswrapper[4730]: I0131 16:56:45.140859 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:45 crc kubenswrapper[4730]: I0131 16:56:45.194006 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjqb6"] Jan 31 16:56:45 crc kubenswrapper[4730]: I0131 16:56:45.464107 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:56:45 crc kubenswrapper[4730]: I0131 16:56:45.464147 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:56:45 crc kubenswrapper[4730]: E0131 16:56:45.464525 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.107033 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kjqb6" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="registry-server" containerID="cri-o://f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c" gracePeriod=2 Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.464735 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.465137 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.465161 4730 scope.go:117] "RemoveContainer" containerID="1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.465220 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:56:47 crc kubenswrapper[4730]: E0131 16:56:47.465588 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.561932 4730 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.660058 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-467xm\" (UniqueName: \"kubernetes.io/projected/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-kube-api-access-467xm\") pod \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.660456 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-catalog-content\") pod \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.660596 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-utilities\") pod \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\" (UID: \"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a\") " Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.661354 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-utilities" (OuterVolumeSpecName: "utilities") pod "aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" (UID: "aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.670590 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-kube-api-access-467xm" (OuterVolumeSpecName: "kube-api-access-467xm") pod "aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" (UID: "aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a"). InnerVolumeSpecName "kube-api-access-467xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.690451 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" (UID: "aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.762697 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-467xm\" (UniqueName: \"kubernetes.io/projected/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-kube-api-access-467xm\") on node \"crc\" DevicePath \"\"" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.762734 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:56:47 crc kubenswrapper[4730]: I0131 16:56:47.762746 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.048882 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qwdrx"] Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.058585 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qpskq"] Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.069469 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qpskq"] Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.077306 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qwdrx"] Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.122586 4730 generic.go:334] "Generic (PLEG): container finished" podID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerID="f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c" exitCode=0 Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.122642 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjqb6" event={"ID":"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a","Type":"ContainerDied","Data":"f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c"} Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.122681 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kjqb6" event={"ID":"aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a","Type":"ContainerDied","Data":"2ffcef2cf6b6db1d1cf7fbd9c8c94c4fec6eb3e4d3114f1be5669600dadbb9f2"} Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.122713 4730 scope.go:117] "RemoveContainer" containerID="f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.122925 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kjqb6" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.171796 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjqb6"] Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.177960 4730 scope.go:117] "RemoveContainer" containerID="3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.185868 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kjqb6"] Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.206004 4730 scope.go:117] "RemoveContainer" containerID="8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.271757 4730 scope.go:117] "RemoveContainer" containerID="f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c" Jan 31 16:56:48 crc kubenswrapper[4730]: E0131 16:56:48.272210 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c\": container with ID starting with f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c not found: ID does not exist" containerID="f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.272255 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c"} err="failed to get container status \"f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c\": rpc error: code = NotFound desc = could not find container \"f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c\": container with ID starting with f2a10b7f775a9ec19e56d0335f8877cd49f08f8a123ecc18ca06962e045db65c not found: ID does not exist" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.272286 4730 scope.go:117] "RemoveContainer" containerID="3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19" Jan 31 16:56:48 crc kubenswrapper[4730]: E0131 16:56:48.272603 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19\": container with ID starting with 3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19 not found: ID does not exist" containerID="3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.272629 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19"} err="failed to get container status \"3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19\": rpc error: code = NotFound desc = could not find container \"3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19\": container with ID starting with 3340f6a6314d25eaa057b16a5e6c4366a335927ac06845d36624564e1edf2f19 not found: ID does not exist" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.272645 4730 scope.go:117] "RemoveContainer" containerID="8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d" Jan 31 16:56:48 crc kubenswrapper[4730]: E0131 16:56:48.272980 4730 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d\": container with ID starting with 8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d not found: ID does not exist" containerID="8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.273019 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d"} err="failed to get container status \"8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d\": rpc error: code = NotFound desc = could not find container \"8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d\": container with ID starting with 8e543650f809ab84768ba262c6443e6dcaf9bb2f6afbd59ec991ce0865d1be5d not found: ID does not exist" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.477924 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60776ef1-a236-4e56-a837-ccb57d6474a9" path="/var/lib/kubelet/pods/60776ef1-a236-4e56-a837-ccb57d6474a9/volumes" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.478841 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" path="/var/lib/kubelet/pods/aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a/volumes" Jan 31 16:56:48 crc kubenswrapper[4730]: I0131 16:56:48.479940 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1243bfc-8196-4501-9b35-89e359501a00" path="/var/lib/kubelet/pods/f1243bfc-8196-4501-9b35-89e359501a00/volumes" Jan 31 16:56:55 crc kubenswrapper[4730]: I0131 16:56:55.063627 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wkj2z"] Jan 31 16:56:55 crc kubenswrapper[4730]: I0131 16:56:55.073015 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wkj2z"] Jan 31 16:56:56 crc kubenswrapper[4730]: I0131 16:56:56.489649 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd279f9-efa4-4fb3-a6e0-655de1c20403" path="/var/lib/kubelet/pods/2fd279f9-efa4-4fb3-a6e0-655de1c20403/volumes" Jan 31 16:56:58 crc kubenswrapper[4730]: I0131 16:56:58.464917 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:56:58 crc kubenswrapper[4730]: I0131 16:56:58.465154 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:56:58 crc kubenswrapper[4730]: E0131 16:56:58.465458 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:56:58 crc kubenswrapper[4730]: I0131 16:56:58.466122 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:56:58 crc kubenswrapper[4730]: I0131 16:56:58.466293 4730 scope.go:117] "RemoveContainer" 
containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:56:58 crc kubenswrapper[4730]: I0131 16:56:58.466372 4730 scope.go:117] "RemoveContainer" containerID="1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9" Jan 31 16:56:58 crc kubenswrapper[4730]: I0131 16:56:58.466528 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:56:59 crc kubenswrapper[4730]: I0131 16:56:59.251142 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" exitCode=1 Jan 31 16:56:59 crc kubenswrapper[4730]: I0131 16:56:59.251297 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3"} Jan 31 16:56:59 crc kubenswrapper[4730]: I0131 16:56:59.251569 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d"} Jan 31 16:56:59 crc kubenswrapper[4730]: I0131 16:56:59.251592 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4"} Jan 31 16:56:59 crc kubenswrapper[4730]: I0131 16:56:59.251618 4730 scope.go:117] "RemoveContainer" containerID="0b80ae9b7b3adcb111c958f3ea58e2c7fd6bdf6c9ebd6638b1473cd003c3b3fc" Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.301103 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" exitCode=1 Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.301465 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" exitCode=1 Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.301317 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d"} Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.301509 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc"} Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.301534 4730 scope.go:117] "RemoveContainer" containerID="86cdc668acad50b58e89a5388ace8c7e08557c4d6908bfda285093ec52b84d49" Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.302359 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.302504 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.302698 4730 scope.go:117] "RemoveContainer" 
containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:57:00 crc kubenswrapper[4730]: E0131 16:57:00.303280 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:57:00 crc kubenswrapper[4730]: I0131 16:57:00.388866 4730 scope.go:117] "RemoveContainer" containerID="adc18c527490b378e136b0e4275b305a54442556cb0bd339514b20196d2ea071" Jan 31 16:57:01 crc kubenswrapper[4730]: I0131 16:57:01.325488 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:57:01 crc kubenswrapper[4730]: I0131 16:57:01.325595 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:57:01 crc kubenswrapper[4730]: I0131 16:57:01.325747 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:57:01 crc kubenswrapper[4730]: E0131 16:57:01.326366 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:57:03 crc kubenswrapper[4730]: I0131 16:57:03.047744 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xfklz"] Jan 31 16:57:03 crc kubenswrapper[4730]: I0131 16:57:03.061724 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xfklz"] Jan 31 16:57:04 crc kubenswrapper[4730]: I0131 16:57:04.477048 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53655839-53b2-46cb-b859-fdb3376bc398" path="/var/lib/kubelet/pods/53655839-53b2-46cb-b859-fdb3376bc398/volumes" Jan 31 16:57:13 crc kubenswrapper[4730]: I0131 16:57:13.465109 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:57:13 crc kubenswrapper[4730]: I0131 16:57:13.465521 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:57:13 crc kubenswrapper[4730]: E0131 16:57:13.465713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:57:15 crc kubenswrapper[4730]: I0131 16:57:15.464759 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:57:15 crc kubenswrapper[4730]: I0131 16:57:15.465996 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:57:15 crc kubenswrapper[4730]: I0131 16:57:15.466193 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:57:15 crc kubenswrapper[4730]: E0131 16:57:15.466621 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:57:23 crc kubenswrapper[4730]: I0131 16:57:23.086290 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:57:23 crc kubenswrapper[4730]: E0131 16:57:23.086531 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:57:23 crc kubenswrapper[4730]: E0131 16:57:23.087422 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 16:59:25.08739235 +0000 UTC m=+1751.893449306 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:57:25 crc kubenswrapper[4730]: I0131 16:57:25.464598 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:57:25 crc kubenswrapper[4730]: I0131 16:57:25.464973 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:57:25 crc kubenswrapper[4730]: E0131 16:57:25.465378 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:57:26 crc kubenswrapper[4730]: E0131 16:57:26.231632 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 16:57:26 crc kubenswrapper[4730]: I0131 16:57:26.609160 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:57:26 crc kubenswrapper[4730]: I0131 16:57:26.975162 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:57:26 crc kubenswrapper[4730]: I0131 16:57:26.975232 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:57:27 crc kubenswrapper[4730]: I0131 16:57:27.465630 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:57:27 crc kubenswrapper[4730]: I0131 16:57:27.465718 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:57:27 crc kubenswrapper[4730]: I0131 16:57:27.465860 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:57:27 crc kubenswrapper[4730]: E0131 16:57:27.466193 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.598581 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8g6t7"] Jan 31 16:57:28 crc kubenswrapper[4730]: E0131 16:57:28.598961 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="registry-server" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.598973 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="registry-server" Jan 31 16:57:28 crc kubenswrapper[4730]: E0131 16:57:28.598999 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="extract-content" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.599004 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="extract-content" Jan 31 16:57:28 crc kubenswrapper[4730]: E0131 16:57:28.599021 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="extract-utilities" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.599027 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="extract-utilities" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.599209 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec9a72c-8c4d-4ab7-b2aa-4adde0b08b7a" containerName="registry-server" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.600365 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.620569 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8g6t7"] Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.703919 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-catalog-content\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.703994 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmrh\" (UniqueName: \"kubernetes.io/projected/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-kube-api-access-nqmrh\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.704075 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-utilities\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.806281 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-catalog-content\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.806333 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmrh\" (UniqueName: \"kubernetes.io/projected/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-kube-api-access-nqmrh\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.806366 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-utilities\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.806888 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-utilities\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.807032 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-catalog-content\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.826353 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nqmrh\" (UniqueName: \"kubernetes.io/projected/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-kube-api-access-nqmrh\") pod \"redhat-operators-8g6t7\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:28 crc kubenswrapper[4730]: I0131 16:57:28.920295 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:29 crc kubenswrapper[4730]: I0131 16:57:29.380288 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8g6t7"] Jan 31 16:57:29 crc kubenswrapper[4730]: I0131 16:57:29.633078 4730 generic.go:334] "Generic (PLEG): container finished" podID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerID="bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda" exitCode=0 Jan 31 16:57:29 crc kubenswrapper[4730]: I0131 16:57:29.633176 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g6t7" event={"ID":"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed","Type":"ContainerDied","Data":"bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda"} Jan 31 16:57:29 crc kubenswrapper[4730]: I0131 16:57:29.633314 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g6t7" event={"ID":"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed","Type":"ContainerStarted","Data":"7ad03bf00bfb2528c63ddbddf0aff44fba905ad9e3be48c4ab19d7f0b4f9fd34"} Jan 31 16:57:30 crc kubenswrapper[4730]: I0131 16:57:30.641034 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g6t7" event={"ID":"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed","Type":"ContainerStarted","Data":"e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7"} Jan 31 16:57:35 crc kubenswrapper[4730]: I0131 16:57:35.692694 4730 generic.go:334] "Generic (PLEG): container finished" podID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerID="e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7" exitCode=0 Jan 31 16:57:35 crc kubenswrapper[4730]: I0131 16:57:35.692860 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g6t7" event={"ID":"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed","Type":"ContainerDied","Data":"e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7"} Jan 31 16:57:36 crc kubenswrapper[4730]: I0131 16:57:36.463794 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:57:36 crc kubenswrapper[4730]: I0131 16:57:36.464052 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:57:36 crc kubenswrapper[4730]: E0131 16:57:36.464332 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:57:36 crc kubenswrapper[4730]: I0131 16:57:36.703420 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8g6t7" event={"ID":"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed","Type":"ContainerStarted","Data":"ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb"} Jan 31 16:57:36 crc kubenswrapper[4730]: I0131 16:57:36.725003 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8g6t7" podStartSLOduration=2.258309184 podStartE2EDuration="8.72498261s" podCreationTimestamp="2026-01-31 16:57:28 +0000 UTC" firstStartedPulling="2026-01-31 16:57:29.635237773 +0000 UTC m=+1636.441294689" lastFinishedPulling="2026-01-31 16:57:36.101911199 +0000 UTC m=+1642.907968115" observedRunningTime="2026-01-31 16:57:36.72284188 +0000 UTC m=+1643.528898786" watchObservedRunningTime="2026-01-31 16:57:36.72498261 +0000 UTC m=+1643.531039526" Jan 31 16:57:38 crc kubenswrapper[4730]: I0131 16:57:38.465345 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:57:38 crc kubenswrapper[4730]: I0131 16:57:38.465708 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:57:38 crc kubenswrapper[4730]: I0131 16:57:38.465862 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:57:38 crc kubenswrapper[4730]: E0131 16:57:38.466293 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:57:38 crc kubenswrapper[4730]: I0131 16:57:38.920949 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:38 crc kubenswrapper[4730]: I0131 16:57:38.921019 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:57:39 crc kubenswrapper[4730]: I0131 16:57:39.964733 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8g6t7" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="registry-server" probeResult="failure" output=< Jan 31 16:57:39 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:57:39 crc kubenswrapper[4730]: > Jan 31 16:57:40 crc kubenswrapper[4730]: I0131 16:57:40.628317 4730 scope.go:117] "RemoveContainer" containerID="40be573340b10cb3c61e30fe8e2cf52895d46d55f706c6158a5680c75321aca9" Jan 31 16:57:40 crc kubenswrapper[4730]: I0131 16:57:40.671843 4730 scope.go:117] "RemoveContainer" containerID="1eda9ad6eb506fb6820f116265d9d58d5a39d69480873128b089af6f5d2c078f" Jan 31 16:57:40 crc kubenswrapper[4730]: I0131 16:57:40.708346 4730 scope.go:117] "RemoveContainer" containerID="8aca09008a0d1c8b61f105f17f9581ec956efa657ae788587ccb0e38e29e1a05" Jan 31 16:57:40 crc 
kubenswrapper[4730]: I0131 16:57:40.765961 4730 scope.go:117] "RemoveContainer" containerID="d98c34e03192a3f9bd62a9607de7d72a09e66464a566381ee903f28e2cd9c66e" Jan 31 16:57:49 crc kubenswrapper[4730]: I0131 16:57:49.464047 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:57:49 crc kubenswrapper[4730]: I0131 16:57:49.464537 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:57:49 crc kubenswrapper[4730]: E0131 16:57:49.464793 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:57:49 crc kubenswrapper[4730]: I0131 16:57:49.966511 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8g6t7" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="registry-server" probeResult="failure" output=< Jan 31 16:57:49 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:57:49 crc kubenswrapper[4730]: > Jan 31 16:57:52 crc kubenswrapper[4730]: I0131 16:57:52.469454 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:57:52 crc kubenswrapper[4730]: I0131 16:57:52.469844 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:57:52 crc kubenswrapper[4730]: I0131 16:57:52.469957 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:57:52 crc kubenswrapper[4730]: E0131 16:57:52.470319 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:57:56 crc kubenswrapper[4730]: I0131 16:57:56.974581 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:57:56 crc kubenswrapper[4730]: I0131 16:57:56.975097 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:57:59 crc kubenswrapper[4730]: I0131 16:57:59.042489 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4b7f-account-create-update-vk2h2"] Jan 31 16:57:59 crc kubenswrapper[4730]: I0131 16:57:59.049259 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jqxt9"] Jan 31 16:57:59 crc kubenswrapper[4730]: I0131 16:57:59.056138 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4b7f-account-create-update-vk2h2"] Jan 31 16:57:59 crc kubenswrapper[4730]: I0131 16:57:59.063205 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jqxt9"] Jan 31 16:57:59 crc kubenswrapper[4730]: I0131 16:57:59.975384 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8g6t7" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="registry-server" probeResult="failure" output=< Jan 31 16:57:59 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 16:57:59 crc kubenswrapper[4730]: > Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.036502 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-99f6t"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.046132 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7lgq6"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.055161 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-af11-account-create-update-sxvz7"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.062791 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-72f6-account-create-update-b8qp4"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.073893 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-99f6t"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.084275 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7lgq6"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.093516 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-af11-account-create-update-sxvz7"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.099864 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-72f6-account-create-update-b8qp4"] Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.481977 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a13f5b-ba5a-4fe2-b395-29562d21fd40" path="/var/lib/kubelet/pods/05a13f5b-ba5a-4fe2-b395-29562d21fd40/volumes" Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.483322 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bfad8d5-bd15-41a8-858c-ffd981537c79" path="/var/lib/kubelet/pods/2bfad8d5-bd15-41a8-858c-ffd981537c79/volumes" Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.484086 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5934f8bc-1134-40af-8af2-57ffcbfddda3" path="/var/lib/kubelet/pods/5934f8bc-1134-40af-8af2-57ffcbfddda3/volumes" Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.484908 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="723811c5-3b5b-4e22-806c-682826895b32" 
path="/var/lib/kubelet/pods/723811c5-3b5b-4e22-806c-682826895b32/volumes" Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.486262 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f85271-c4d1-43fe-95ad-b88443d14a9a" path="/var/lib/kubelet/pods/e4f85271-c4d1-43fe-95ad-b88443d14a9a/volumes" Jan 31 16:58:00 crc kubenswrapper[4730]: I0131 16:58:00.487086 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f52f18ff-5693-4ec1-ba5d-9df137257c40" path="/var/lib/kubelet/pods/f52f18ff-5693-4ec1-ba5d-9df137257c40/volumes" Jan 31 16:58:04 crc kubenswrapper[4730]: I0131 16:58:04.466307 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:04 crc kubenswrapper[4730]: I0131 16:58:04.466789 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:58:04 crc kubenswrapper[4730]: E0131 16:58:04.467048 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:07 crc kubenswrapper[4730]: I0131 16:58:07.465347 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:58:07 crc kubenswrapper[4730]: I0131 16:58:07.465762 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:58:07 crc kubenswrapper[4730]: I0131 16:58:07.465889 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:58:07 crc kubenswrapper[4730]: E0131 16:58:07.466222 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:58:09 crc kubenswrapper[4730]: I0131 16:58:09.009095 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:58:09 crc kubenswrapper[4730]: I0131 16:58:09.224729 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:58:09 crc kubenswrapper[4730]: I0131 16:58:09.276342 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8g6t7"] Jan 31 16:58:10 crc kubenswrapper[4730]: I0131 
16:58:10.983440 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8g6t7" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="registry-server" containerID="cri-o://ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb" gracePeriod=2 Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.424611 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.546058 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqmrh\" (UniqueName: \"kubernetes.io/projected/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-kube-api-access-nqmrh\") pod \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.546259 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-catalog-content\") pod \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.546297 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-utilities\") pod \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\" (UID: \"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed\") " Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.548426 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-utilities" (OuterVolumeSpecName: "utilities") pod "a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" (UID: "a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.556215 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-kube-api-access-nqmrh" (OuterVolumeSpecName: "kube-api-access-nqmrh") pod "a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" (UID: "a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed"). InnerVolumeSpecName "kube-api-access-nqmrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.648751 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqmrh\" (UniqueName: \"kubernetes.io/projected/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-kube-api-access-nqmrh\") on node \"crc\" DevicePath \"\"" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.648791 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.656566 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" (UID: "a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.750705 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.991880 4730 generic.go:334] "Generic (PLEG): container finished" podID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerID="ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb" exitCode=0 Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.991916 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g6t7" event={"ID":"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed","Type":"ContainerDied","Data":"ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb"} Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.991941 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g6t7" event={"ID":"a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed","Type":"ContainerDied","Data":"7ad03bf00bfb2528c63ddbddf0aff44fba905ad9e3be48c4ab19d7f0b4f9fd34"} Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.991959 4730 scope.go:117] "RemoveContainer" containerID="ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb" Jan 31 16:58:11 crc kubenswrapper[4730]: I0131 16:58:11.992068 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g6t7" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.013918 4730 scope.go:117] "RemoveContainer" containerID="e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.027669 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8g6t7"] Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.043161 4730 scope.go:117] "RemoveContainer" containerID="bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.043571 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8g6t7"] Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.101291 4730 scope.go:117] "RemoveContainer" containerID="ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb" Jan 31 16:58:12 crc kubenswrapper[4730]: E0131 16:58:12.101701 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb\": container with ID starting with ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb not found: ID does not exist" containerID="ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.101735 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb"} err="failed to get container status \"ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb\": rpc error: code = NotFound desc = could not find container \"ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb\": container with ID starting with ff9acb721dfc14304ff8ab8854b7635299577061a7a0702a9066656ca67b8adb not found: ID does not exist" Jan 31 16:58:12 crc 
kubenswrapper[4730]: I0131 16:58:12.101754 4730 scope.go:117] "RemoveContainer" containerID="e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7" Jan 31 16:58:12 crc kubenswrapper[4730]: E0131 16:58:12.102154 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7\": container with ID starting with e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7 not found: ID does not exist" containerID="e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.102175 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7"} err="failed to get container status \"e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7\": rpc error: code = NotFound desc = could not find container \"e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7\": container with ID starting with e4601a5792c12e06dc4ef1e38d6a87245402b38fe1ffbbe638453ae066c4faf7 not found: ID does not exist" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.102189 4730 scope.go:117] "RemoveContainer" containerID="bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda" Jan 31 16:58:12 crc kubenswrapper[4730]: E0131 16:58:12.102494 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda\": container with ID starting with bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda not found: ID does not exist" containerID="bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.102525 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda"} err="failed to get container status \"bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda\": rpc error: code = NotFound desc = could not find container \"bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda\": container with ID starting with bdc2fec8fae53cd8acb372362ee426204e7a6466dda434ffeeae4b6506604cda not found: ID does not exist" Jan 31 16:58:12 crc kubenswrapper[4730]: I0131 16:58:12.482943 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" path="/var/lib/kubelet/pods/a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed/volumes" Jan 31 16:58:18 crc kubenswrapper[4730]: I0131 16:58:18.464132 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:18 crc kubenswrapper[4730]: I0131 16:58:18.465917 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:58:18 crc kubenswrapper[4730]: E0131 16:58:18.466521 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:20 crc kubenswrapper[4730]: I0131 16:58:20.465237 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:58:20 crc kubenswrapper[4730]: I0131 16:58:20.465710 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:58:20 crc kubenswrapper[4730]: I0131 16:58:20.465922 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:58:20 crc kubenswrapper[4730]: E0131 16:58:20.466479 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:58:26 crc kubenswrapper[4730]: I0131 16:58:26.978192 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:58:26 crc kubenswrapper[4730]: I0131 16:58:26.978970 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:58:26 crc kubenswrapper[4730]: I0131 16:58:26.979042 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 16:58:26 crc kubenswrapper[4730]: I0131 16:58:26.980138 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:58:26 crc kubenswrapper[4730]: I0131 16:58:26.980221 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" gracePeriod=600 Jan 31 16:58:27 crc kubenswrapper[4730]: E0131 16:58:27.114301 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:58:27 crc kubenswrapper[4730]: I0131 16:58:27.167921 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" exitCode=0 Jan 31 16:58:27 crc kubenswrapper[4730]: I0131 16:58:27.168043 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d"} Jan 31 16:58:27 crc kubenswrapper[4730]: I0131 16:58:27.168143 4730 scope.go:117] "RemoveContainer" containerID="43b7bb63726524ca697f41266f3bd99562b62d62470c4a1e15fd3ef35c3d68d2" Jan 31 16:58:27 crc kubenswrapper[4730]: I0131 16:58:27.168788 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 16:58:27 crc kubenswrapper[4730]: E0131 16:58:27.169387 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:58:31 crc kubenswrapper[4730]: I0131 16:58:31.089654 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvj5l"] Jan 31 16:58:31 crc kubenswrapper[4730]: I0131 16:58:31.104179 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dvj5l"] Jan 31 16:58:31 crc kubenswrapper[4730]: I0131 16:58:31.464558 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:31 crc kubenswrapper[4730]: I0131 16:58:31.464586 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:58:31 crc kubenswrapper[4730]: E0131 16:58:31.464889 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:32 crc kubenswrapper[4730]: I0131 16:58:32.480472 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05019b69-099e-4b89-b072-ea6b1f2019e3" path="/var/lib/kubelet/pods/05019b69-099e-4b89-b072-ea6b1f2019e3/volumes" Jan 31 16:58:34 crc kubenswrapper[4730]: I0131 16:58:34.469365 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:58:34 crc kubenswrapper[4730]: I0131 16:58:34.469702 4730 
scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:58:34 crc kubenswrapper[4730]: I0131 16:58:34.469845 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:58:34 crc kubenswrapper[4730]: E0131 16:58:34.470236 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:58:40 crc kubenswrapper[4730]: I0131 16:58:40.866929 4730 scope.go:117] "RemoveContainer" containerID="7baa455b583bf8932473b82c023ae9a2b5b3176cf6c0c8036213ff16f646471b" Jan 31 16:58:40 crc kubenswrapper[4730]: I0131 16:58:40.914653 4730 scope.go:117] "RemoveContainer" containerID="969db365e982ca78a8b274abc63fb16baa4ac0310c2b7a6f82a570b4b8128bab" Jan 31 16:58:40 crc kubenswrapper[4730]: I0131 16:58:40.979306 4730 scope.go:117] "RemoveContainer" containerID="af49d33e2b53192a139e2aa279b7240d4161610f4bc8fe6866dadbaa822c8ede" Jan 31 16:58:41 crc kubenswrapper[4730]: I0131 16:58:41.009785 4730 scope.go:117] "RemoveContainer" containerID="a5db78471750fc731b4c8a042342459fc01a554a2b2f2aa60de2e00220da9925" Jan 31 16:58:41 crc kubenswrapper[4730]: I0131 16:58:41.043088 4730 scope.go:117] "RemoveContainer" containerID="5131ef244154a0a2e7c22c81b42de30262955196c77bfe00d7723e7fcde9b2a5" Jan 31 16:58:41 crc kubenswrapper[4730]: I0131 16:58:41.083086 4730 scope.go:117] "RemoveContainer" containerID="e9faa9b458cb34108e57efd0c24d388dbaa42765ffdbfb57bfabb208a5189567" Jan 31 16:58:41 crc kubenswrapper[4730]: I0131 16:58:41.136323 4730 scope.go:117] "RemoveContainer" containerID="c1dbdba61f3503c6ddaa4f2e3c04bddba0ce40a074719da2a57ebb8ff80b9ce9" Jan 31 16:58:42 crc kubenswrapper[4730]: I0131 16:58:42.468185 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 16:58:42 crc kubenswrapper[4730]: E0131 16:58:42.468869 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:58:44 crc kubenswrapper[4730]: I0131 16:58:44.466126 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:44 crc kubenswrapper[4730]: I0131 16:58:44.466355 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:58:44 crc kubenswrapper[4730]: E0131 16:58:44.646900 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:45 crc kubenswrapper[4730]: I0131 16:58:45.356189 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d"} Jan 31 16:58:45 crc kubenswrapper[4730]: I0131 16:58:45.357041 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:58:45 crc kubenswrapper[4730]: I0131 16:58:45.357595 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:45 crc kubenswrapper[4730]: E0131 16:58:45.358203 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:46 crc kubenswrapper[4730]: I0131 16:58:46.374142 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" exitCode=1 Jan 31 16:58:46 crc kubenswrapper[4730]: I0131 16:58:46.374205 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d"} Jan 31 16:58:46 crc kubenswrapper[4730]: I0131 16:58:46.374251 4730 scope.go:117] "RemoveContainer" containerID="9aba039b335082f6098f619a03c01bf7db37a7e6f292baf5a0af4cae48cf4383" Jan 31 16:58:46 crc kubenswrapper[4730]: I0131 16:58:46.375116 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:46 crc kubenswrapper[4730]: I0131 16:58:46.375148 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:58:46 crc kubenswrapper[4730]: E0131 16:58:46.375729 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:47 crc kubenswrapper[4730]: I0131 16:58:47.389037 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:47 crc kubenswrapper[4730]: I0131 16:58:47.389325 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:58:47 crc kubenswrapper[4730]: E0131 
16:58:47.389615 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:47 crc kubenswrapper[4730]: I0131 16:58:47.464769 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:58:47 crc kubenswrapper[4730]: I0131 16:58:47.465061 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:58:47 crc kubenswrapper[4730]: I0131 16:58:47.465339 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:58:47 crc kubenswrapper[4730]: E0131 16:58:47.465931 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:58:48 crc kubenswrapper[4730]: I0131 16:58:48.653621 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:58:48 crc kubenswrapper[4730]: I0131 16:58:48.655175 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:58:48 crc kubenswrapper[4730]: I0131 16:58:48.655207 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:58:48 crc kubenswrapper[4730]: E0131 16:58:48.655734 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:58:55 crc kubenswrapper[4730]: I0131 16:58:55.046631 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hbl4w"] Jan 31 16:58:55 crc kubenswrapper[4730]: I0131 16:58:55.052318 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hbl4w"] Jan 31 16:58:56 crc kubenswrapper[4730]: I0131 16:58:56.044379 
4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b9fwh"] Jan 31 16:58:56 crc kubenswrapper[4730]: I0131 16:58:56.057755 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b9fwh"] Jan 31 16:58:56 crc kubenswrapper[4730]: I0131 16:58:56.467169 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 16:58:56 crc kubenswrapper[4730]: E0131 16:58:56.467684 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:58:56 crc kubenswrapper[4730]: I0131 16:58:56.480400 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0bdf14-73a8-4d89-bdfe-b250d4b6a714" path="/var/lib/kubelet/pods/2b0bdf14-73a8-4d89-bdfe-b250d4b6a714/volumes" Jan 31 16:58:56 crc kubenswrapper[4730]: I0131 16:58:56.481305 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638775e1-f41e-4dd4-a0b3-0a77ceccd15b" path="/var/lib/kubelet/pods/638775e1-f41e-4dd4-a0b3-0a77ceccd15b/volumes" Jan 31 16:59:00 crc kubenswrapper[4730]: I0131 16:59:00.464335 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:59:00 crc kubenswrapper[4730]: I0131 16:59:00.464764 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:00 crc kubenswrapper[4730]: I0131 16:59:00.465174 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:59:00 crc kubenswrapper[4730]: I0131 16:59:00.465272 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:59:00 crc kubenswrapper[4730]: I0131 16:59:00.465424 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:59:00 crc kubenswrapper[4730]: E0131 16:59:00.465854 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:59:00 crc kubenswrapper[4730]: E0131 16:59:00.666620 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" 
pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:01 crc kubenswrapper[4730]: I0131 16:59:01.536973 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"33b8def0235b6db94fb9a78c4216680a0809cf947ad72647947c1ce808fd5f31"} Jan 31 16:59:01 crc kubenswrapper[4730]: I0131 16:59:01.537686 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:59:01 crc kubenswrapper[4730]: I0131 16:59:01.538230 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:01 crc kubenswrapper[4730]: E0131 16:59:01.538640 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:02 crc kubenswrapper[4730]: I0131 16:59:02.546507 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:02 crc kubenswrapper[4730]: E0131 16:59:02.547016 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:05 crc kubenswrapper[4730]: I0131 16:59:05.664861 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:06 crc kubenswrapper[4730]: I0131 16:59:06.658936 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:09 crc kubenswrapper[4730]: I0131 16:59:09.464633 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 16:59:09 crc kubenswrapper[4730]: E0131 16:59:09.465236 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:59:09 crc kubenswrapper[4730]: I0131 16:59:09.667897 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:10 crc kubenswrapper[4730]: I0131 16:59:10.663842 4730 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:11 crc kubenswrapper[4730]: I0131 16:59:11.464557 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:59:11 crc kubenswrapper[4730]: I0131 16:59:11.464652 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:59:11 crc kubenswrapper[4730]: I0131 16:59:11.464776 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:59:11 crc kubenswrapper[4730]: E0131 16:59:11.465247 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.652425 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" exitCode=1 Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.652485 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563"} Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.653704 4730 scope.go:117] "RemoveContainer" containerID="76cd14f75be0a2e7271e97c2e84874497a20bad6efb9697ecd4ecf25b2af12cd" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.653836 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.653956 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.654075 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.654109 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:59:12 crc kubenswrapper[4730]: E0131 16:59:12.654794 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.660412 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.660477 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.661219 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"33b8def0235b6db94fb9a78c4216680a0809cf947ad72647947c1ce808fd5f31"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.661249 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.661284 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://33b8def0235b6db94fb9a78c4216680a0809cf947ad72647947c1ce808fd5f31" gracePeriod=30 Jan 31 16:59:12 crc kubenswrapper[4730]: I0131 16:59:12.678928 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:12 crc kubenswrapper[4730]: E0131 16:59:12.994609 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:13 crc kubenswrapper[4730]: I0131 16:59:13.667245 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="33b8def0235b6db94fb9a78c4216680a0809cf947ad72647947c1ce808fd5f31" exitCode=0 Jan 31 16:59:13 crc kubenswrapper[4730]: I0131 16:59:13.667375 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"33b8def0235b6db94fb9a78c4216680a0809cf947ad72647947c1ce808fd5f31"} Jan 31 16:59:13 crc kubenswrapper[4730]: I0131 16:59:13.667662 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" 
event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506"} Jan 31 16:59:13 crc kubenswrapper[4730]: I0131 16:59:13.667694 4730 scope.go:117] "RemoveContainer" containerID="4bfb877b95dc82dc02da3ae85f51eb736bc0a2d79d0775debc2a9ab10f7a5095" Jan 31 16:59:13 crc kubenswrapper[4730]: I0131 16:59:13.668605 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:59:13 crc kubenswrapper[4730]: I0131 16:59:13.668658 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:13 crc kubenswrapper[4730]: E0131 16:59:13.669095 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:14 crc kubenswrapper[4730]: I0131 16:59:14.693170 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:14 crc kubenswrapper[4730]: E0131 16:59:14.693506 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:18 crc kubenswrapper[4730]: I0131 16:59:18.666650 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:20 crc kubenswrapper[4730]: I0131 16:59:20.697487 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:21 crc kubenswrapper[4730]: I0131 16:59:21.664513 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:23 crc kubenswrapper[4730]: I0131 16:59:23.463920 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 16:59:23 crc kubenswrapper[4730]: E0131 16:59:23.464644 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.658022 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.658132 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.659417 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.659454 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.659498 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" gracePeriod=30 Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.668557 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": EOF" Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.968077 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" exitCode=0 Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.968148 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506"} Jan 31 16:59:24 crc kubenswrapper[4730]: I0131 16:59:24.968397 4730 scope.go:117] "RemoveContainer" containerID="33b8def0235b6db94fb9a78c4216680a0809cf947ad72647947c1ce808fd5f31" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.120117 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:59:25 crc kubenswrapper[4730]: E0131 16:59:25.120339 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 16:59:25 crc kubenswrapper[4730]: E0131 16:59:25.120426 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:01:27.120408438 +0000 UTC m=+1873.926465354 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 16:59:25 crc kubenswrapper[4730]: E0131 16:59:25.281996 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.464440 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.464553 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.464682 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.464697 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:59:25 crc kubenswrapper[4730]: E0131 16:59:25.465263 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.654346 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": dial tcp 10.217.0.176:8080: connect: connection refused" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.986907 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 16:59:25 crc kubenswrapper[4730]: I0131 16:59:25.987207 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:25 crc kubenswrapper[4730]: E0131 16:59:25.987421 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:29 crc kubenswrapper[4730]: E0131 16:59:29.611575 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 16:59:30 crc kubenswrapper[4730]: I0131 16:59:30.018653 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 16:59:37 crc kubenswrapper[4730]: I0131 16:59:37.465061 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:59:37 crc kubenswrapper[4730]: I0131 16:59:37.465638 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:59:37 crc kubenswrapper[4730]: I0131 16:59:37.465714 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 16:59:37 crc kubenswrapper[4730]: I0131 16:59:37.465722 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:59:37 crc kubenswrapper[4730]: E0131 16:59:37.466028 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:59:38 crc kubenswrapper[4730]: I0131 16:59:38.465120 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 16:59:38 crc kubenswrapper[4730]: E0131 16:59:38.465594 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:59:39 crc kubenswrapper[4730]: I0131 16:59:39.068794 4730 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tf7gr"] Jan 31 16:59:39 crc kubenswrapper[4730]: I0131 16:59:39.105537 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tf7gr"] Jan 31 16:59:39 crc kubenswrapper[4730]: I0131 16:59:39.465001 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 16:59:39 crc kubenswrapper[4730]: I0131 16:59:39.465281 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:39 crc kubenswrapper[4730]: E0131 16:59:39.465581 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 16:59:40 crc kubenswrapper[4730]: I0131 16:59:40.475246 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f176fb26-f0f7-4a29-9963-d1e2d27805e2" path="/var/lib/kubelet/pods/f176fb26-f0f7-4a29-9963-d1e2d27805e2/volumes" Jan 31 16:59:41 crc kubenswrapper[4730]: I0131 16:59:41.321758 4730 scope.go:117] "RemoveContainer" containerID="b412524c906028320ad3e4ff45adbedff39a1d3e9259b1050ba25cc562ed465e" Jan 31 16:59:41 crc kubenswrapper[4730]: I0131 16:59:41.381049 4730 scope.go:117] "RemoveContainer" containerID="ddf97d903b360d8d8e881549e0bc9e812fb3a927b2fd766ecb3ce83d053ebff4" Jan 31 16:59:41 crc kubenswrapper[4730]: I0131 16:59:41.457027 4730 scope.go:117] "RemoveContainer" containerID="9b0114ec1e0ac2a3934568aeddd701539ff88ff97dad0e68c7f0988adc8c7474" Jan 31 16:59:50 crc kubenswrapper[4730]: I0131 16:59:50.463877 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 16:59:50 crc kubenswrapper[4730]: I0131 16:59:50.464454 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 16:59:50 crc kubenswrapper[4730]: I0131 16:59:50.464576 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 16:59:50 crc kubenswrapper[4730]: E0131 16:59:50.464638 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 16:59:50 crc kubenswrapper[4730]: I0131 16:59:50.464703 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 16:59:50 crc kubenswrapper[4730]: I0131 16:59:50.464732 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 16:59:50 crc kubenswrapper[4730]: E0131 16:59:50.465236 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 16:59:54 crc kubenswrapper[4730]: I0131 16:59:54.473473 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 16:59:54 crc kubenswrapper[4730]: I0131 16:59:54.474013 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 16:59:54 crc kubenswrapper[4730]: E0131 16:59:54.474613 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.180449 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4"] Jan 31 17:00:00 crc kubenswrapper[4730]: E0131 17:00:00.181446 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="extract-utilities" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.181468 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="extract-utilities" Jan 31 17:00:00 crc kubenswrapper[4730]: E0131 17:00:00.181510 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="registry-server" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.181522 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="registry-server" Jan 31 17:00:00 crc kubenswrapper[4730]: E0131 17:00:00.181548 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="extract-content" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.181561 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="extract-content" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.181903 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53c6e8a-dd2b-4e44-90ef-07d0ab5719ed" containerName="registry-server" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 
17:00:00.182875 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.188396 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.189218 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.206696 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4"] Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.212114 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-config-volume\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.212220 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xhf6\" (UniqueName: \"kubernetes.io/projected/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-kube-api-access-8xhf6\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.212288 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-secret-volume\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.314467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-config-volume\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.314549 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xhf6\" (UniqueName: \"kubernetes.io/projected/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-kube-api-access-8xhf6\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.314589 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-secret-volume\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.316411 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-config-volume\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.320675 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-secret-volume\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.334531 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xhf6\" (UniqueName: \"kubernetes.io/projected/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-kube-api-access-8xhf6\") pod \"collect-profiles-29497980-qbjt4\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.527067 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:00 crc kubenswrapper[4730]: I0131 17:00:00.804847 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4"] Jan 31 17:00:01 crc kubenswrapper[4730]: I0131 17:00:01.342153 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" event={"ID":"8f934b7a-a8bb-4e23-97a7-2aecdae4024f","Type":"ContainerStarted","Data":"4098c7309001dd6f8674d3f2d3df3a27a876f9157fc11e6db98ccf39d099cc1d"} Jan 31 17:00:01 crc kubenswrapper[4730]: I0131 17:00:01.342637 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" event={"ID":"8f934b7a-a8bb-4e23-97a7-2aecdae4024f","Type":"ContainerStarted","Data":"1cd147f4a35f6e04ff5c340f19c9385c541dc36c1cb6a9af12ce335b4050b9f8"} Jan 31 17:00:01 crc kubenswrapper[4730]: I0131 17:00:01.369413 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" podStartSLOduration=1.369392511 podStartE2EDuration="1.369392511s" podCreationTimestamp="2026-01-31 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 17:00:01.363224348 +0000 UTC m=+1788.169281264" watchObservedRunningTime="2026-01-31 17:00:01.369392511 +0000 UTC m=+1788.175449427" Jan 31 17:00:02 crc kubenswrapper[4730]: I0131 17:00:02.377578 4730 generic.go:334] "Generic (PLEG): container finished" podID="8f934b7a-a8bb-4e23-97a7-2aecdae4024f" containerID="4098c7309001dd6f8674d3f2d3df3a27a876f9157fc11e6db98ccf39d099cc1d" exitCode=0 Jan 31 17:00:02 crc kubenswrapper[4730]: I0131 17:00:02.377628 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" event={"ID":"8f934b7a-a8bb-4e23-97a7-2aecdae4024f","Type":"ContainerDied","Data":"4098c7309001dd6f8674d3f2d3df3a27a876f9157fc11e6db98ccf39d099cc1d"} Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.681640 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.801915 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-secret-volume\") pod \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.802061 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-config-volume\") pod \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.802660 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f934b7a-a8bb-4e23-97a7-2aecdae4024f" (UID: "8f934b7a-a8bb-4e23-97a7-2aecdae4024f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.802860 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xhf6\" (UniqueName: \"kubernetes.io/projected/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-kube-api-access-8xhf6\") pod \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\" (UID: \"8f934b7a-a8bb-4e23-97a7-2aecdae4024f\") " Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.803260 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.807943 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-kube-api-access-8xhf6" (OuterVolumeSpecName: "kube-api-access-8xhf6") pod "8f934b7a-a8bb-4e23-97a7-2aecdae4024f" (UID: "8f934b7a-a8bb-4e23-97a7-2aecdae4024f"). InnerVolumeSpecName "kube-api-access-8xhf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.808301 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f934b7a-a8bb-4e23-97a7-2aecdae4024f" (UID: "8f934b7a-a8bb-4e23-97a7-2aecdae4024f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.905484 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xhf6\" (UniqueName: \"kubernetes.io/projected/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-kube-api-access-8xhf6\") on node \"crc\" DevicePath \"\"" Jan 31 17:00:03 crc kubenswrapper[4730]: I0131 17:00:03.905513 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f934b7a-a8bb-4e23-97a7-2aecdae4024f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.398759 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" event={"ID":"8f934b7a-a8bb-4e23-97a7-2aecdae4024f","Type":"ContainerDied","Data":"1cd147f4a35f6e04ff5c340f19c9385c541dc36c1cb6a9af12ce335b4050b9f8"} Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.398785 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497980-qbjt4" Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.398797 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd147f4a35f6e04ff5c340f19c9385c541dc36c1cb6a9af12ce335b4050b9f8" Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.466669 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.466757 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.466930 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.466942 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:00:04 crc kubenswrapper[4730]: E0131 17:00:04.467324 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:00:04 crc kubenswrapper[4730]: I0131 17:00:04.467563 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:00:04 crc kubenswrapper[4730]: E0131 17:00:04.468086 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:00:09 crc kubenswrapper[4730]: I0131 17:00:09.466821 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:00:09 crc kubenswrapper[4730]: I0131 17:00:09.467414 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:00:09 crc kubenswrapper[4730]: E0131 17:00:09.469831 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:00:15 crc kubenswrapper[4730]: I0131 17:00:15.464866 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:00:15 crc kubenswrapper[4730]: I0131 17:00:15.465437 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:00:15 crc kubenswrapper[4730]: I0131 17:00:15.465516 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:00:15 crc kubenswrapper[4730]: I0131 17:00:15.465663 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:00:15 crc kubenswrapper[4730]: I0131 17:00:15.465677 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:00:15 crc kubenswrapper[4730]: E0131 17:00:15.466012 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:00:15 crc kubenswrapper[4730]: E0131 17:00:15.466151 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.555151 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" exitCode=1 Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.555828 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3"} Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.555881 4730 scope.go:117] "RemoveContainer" containerID="1d8a11eb2b8f06bb45046cdfdf9dfba7a50149ebc83464b60ce68e56a82386d9" Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.557453 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.557586 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.557641 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.557762 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:00:18 crc kubenswrapper[4730]: I0131 17:00:18.557776 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:00:18 crc kubenswrapper[4730]: E0131 17:00:18.563154 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:00:24 crc kubenswrapper[4730]: I0131 17:00:24.474147 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:00:24 crc kubenswrapper[4730]: I0131 17:00:24.475033 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:00:24 crc kubenswrapper[4730]: E0131 17:00:24.475497 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:00:30 crc kubenswrapper[4730]: I0131 17:00:30.465372 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:00:30 crc kubenswrapper[4730]: I0131 17:00:30.466219 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:00:30 crc kubenswrapper[4730]: I0131 17:00:30.466247 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:00:30 crc kubenswrapper[4730]: I0131 17:00:30.466366 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:00:30 crc kubenswrapper[4730]: I0131 17:00:30.466476 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:00:30 crc kubenswrapper[4730]: I0131 17:00:30.466489 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:00:30 crc kubenswrapper[4730]: E0131 17:00:30.466753 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:00:30 crc kubenswrapper[4730]: E0131 17:00:30.467234 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:00:37 crc kubenswrapper[4730]: I0131 17:00:37.464348 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:00:37 crc kubenswrapper[4730]: I0131 17:00:37.464879 
4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:00:37 crc kubenswrapper[4730]: E0131 17:00:37.465116 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:00:43 crc kubenswrapper[4730]: I0131 17:00:43.465184 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:00:43 crc kubenswrapper[4730]: I0131 17:00:43.465715 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:00:43 crc kubenswrapper[4730]: I0131 17:00:43.465741 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:00:43 crc kubenswrapper[4730]: I0131 17:00:43.465797 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:00:43 crc kubenswrapper[4730]: I0131 17:00:43.465825 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:00:43 crc kubenswrapper[4730]: E0131 17:00:43.466172 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:00:43 crc kubenswrapper[4730]: I0131 17:00:43.467335 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:00:43 crc kubenswrapper[4730]: E0131 17:00:43.467775 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" 
podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:00:51 crc kubenswrapper[4730]: I0131 17:00:51.463845 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:00:51 crc kubenswrapper[4730]: I0131 17:00:51.464379 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:00:51 crc kubenswrapper[4730]: E0131 17:00:51.464584 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:00:55 crc kubenswrapper[4730]: I0131 17:00:55.465439 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:00:55 crc kubenswrapper[4730]: I0131 17:00:55.465861 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:00:55 crc kubenswrapper[4730]: I0131 17:00:55.465891 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:00:55 crc kubenswrapper[4730]: I0131 17:00:55.465951 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:00:55 crc kubenswrapper[4730]: I0131 17:00:55.465962 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:00:55 crc kubenswrapper[4730]: E0131 17:00:55.466378 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:00:58 crc kubenswrapper[4730]: I0131 17:00:58.464998 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:00:58 crc kubenswrapper[4730]: E0131 17:00:58.466166 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.173385 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497981-dss2z"] Jan 31 17:01:00 crc kubenswrapper[4730]: E0131 17:01:00.174044 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f934b7a-a8bb-4e23-97a7-2aecdae4024f" containerName="collect-profiles" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.174067 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f934b7a-a8bb-4e23-97a7-2aecdae4024f" containerName="collect-profiles" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.175147 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f934b7a-a8bb-4e23-97a7-2aecdae4024f" containerName="collect-profiles" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.176462 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.203034 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497981-dss2z"] Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.272229 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-combined-ca-bundle\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.272617 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mvl\" (UniqueName: \"kubernetes.io/projected/e2480e28-9925-4151-90a2-8db7d28e20f3-kube-api-access-w5mvl\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.272783 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-fernet-keys\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.273439 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-config-data\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.375589 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mvl\" (UniqueName: \"kubernetes.io/projected/e2480e28-9925-4151-90a2-8db7d28e20f3-kube-api-access-w5mvl\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.375668 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-fernet-keys\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.375741 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-config-data\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.375856 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-combined-ca-bundle\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.383626 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-combined-ca-bundle\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.385219 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-fernet-keys\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.385758 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-config-data\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.396565 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mvl\" (UniqueName: \"kubernetes.io/projected/e2480e28-9925-4151-90a2-8db7d28e20f3-kube-api-access-w5mvl\") pod \"keystone-cron-29497981-dss2z\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:00 crc kubenswrapper[4730]: I0131 17:01:00.533970 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:01 crc kubenswrapper[4730]: I0131 17:01:01.038848 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497981-dss2z"] Jan 31 17:01:01 crc kubenswrapper[4730]: I0131 17:01:01.992122 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497981-dss2z" event={"ID":"e2480e28-9925-4151-90a2-8db7d28e20f3","Type":"ContainerStarted","Data":"fb6a1de3463c91a2a1387f4f0dcb0fb30becb992d83ad2aca544ece1c75abad4"} Jan 31 17:01:01 crc kubenswrapper[4730]: I0131 17:01:01.992376 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497981-dss2z" event={"ID":"e2480e28-9925-4151-90a2-8db7d28e20f3","Type":"ContainerStarted","Data":"f311d09b13da7960544846b0f605cce35fe32fd973548044d32f6e87f307f97c"} Jan 31 17:01:02 crc kubenswrapper[4730]: I0131 17:01:02.015519 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29497981-dss2z" podStartSLOduration=2.015498146 podStartE2EDuration="2.015498146s" podCreationTimestamp="2026-01-31 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 17:01:02.007383899 +0000 UTC m=+1848.813440825" watchObservedRunningTime="2026-01-31 17:01:02.015498146 +0000 UTC m=+1848.821555072" Jan 31 17:01:03 crc kubenswrapper[4730]: I0131 17:01:03.465196 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:01:03 crc kubenswrapper[4730]: I0131 17:01:03.465549 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:01:03 crc kubenswrapper[4730]: E0131 17:01:03.465955 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:01:05 crc kubenswrapper[4730]: I0131 17:01:05.031524 4730 generic.go:334] "Generic (PLEG): container finished" podID="e2480e28-9925-4151-90a2-8db7d28e20f3" containerID="fb6a1de3463c91a2a1387f4f0dcb0fb30becb992d83ad2aca544ece1c75abad4" exitCode=0 Jan 31 17:01:05 crc kubenswrapper[4730]: I0131 17:01:05.031619 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497981-dss2z" event={"ID":"e2480e28-9925-4151-90a2-8db7d28e20f3","Type":"ContainerDied","Data":"fb6a1de3463c91a2a1387f4f0dcb0fb30becb992d83ad2aca544ece1c75abad4"} Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.385726 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.504889 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-fernet-keys\") pod \"e2480e28-9925-4151-90a2-8db7d28e20f3\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.505291 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-config-data\") pod \"e2480e28-9925-4151-90a2-8db7d28e20f3\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.505441 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-combined-ca-bundle\") pod \"e2480e28-9925-4151-90a2-8db7d28e20f3\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.505550 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mvl\" (UniqueName: \"kubernetes.io/projected/e2480e28-9925-4151-90a2-8db7d28e20f3-kube-api-access-w5mvl\") pod \"e2480e28-9925-4151-90a2-8db7d28e20f3\" (UID: \"e2480e28-9925-4151-90a2-8db7d28e20f3\") " Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.519012 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e2480e28-9925-4151-90a2-8db7d28e20f3" (UID: "e2480e28-9925-4151-90a2-8db7d28e20f3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.519164 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2480e28-9925-4151-90a2-8db7d28e20f3-kube-api-access-w5mvl" (OuterVolumeSpecName: "kube-api-access-w5mvl") pod "e2480e28-9925-4151-90a2-8db7d28e20f3" (UID: "e2480e28-9925-4151-90a2-8db7d28e20f3"). InnerVolumeSpecName "kube-api-access-w5mvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.533564 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2480e28-9925-4151-90a2-8db7d28e20f3" (UID: "e2480e28-9925-4151-90a2-8db7d28e20f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.569343 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-config-data" (OuterVolumeSpecName: "config-data") pod "e2480e28-9925-4151-90a2-8db7d28e20f3" (UID: "e2480e28-9925-4151-90a2-8db7d28e20f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.608674 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.608707 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5mvl\" (UniqueName: \"kubernetes.io/projected/e2480e28-9925-4151-90a2-8db7d28e20f3-kube-api-access-w5mvl\") on node \"crc\" DevicePath \"\"" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.608723 4730 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 17:01:06 crc kubenswrapper[4730]: I0131 17:01:06.608735 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2480e28-9925-4151-90a2-8db7d28e20f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 17:01:07 crc kubenswrapper[4730]: I0131 17:01:07.055133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497981-dss2z" event={"ID":"e2480e28-9925-4151-90a2-8db7d28e20f3","Type":"ContainerDied","Data":"f311d09b13da7960544846b0f605cce35fe32fd973548044d32f6e87f307f97c"} Jan 31 17:01:07 crc kubenswrapper[4730]: I0131 17:01:07.055171 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f311d09b13da7960544846b0f605cce35fe32fd973548044d32f6e87f307f97c" Jan 31 17:01:07 crc kubenswrapper[4730]: I0131 17:01:07.055296 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497981-dss2z" Jan 31 17:01:09 crc kubenswrapper[4730]: I0131 17:01:09.464186 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:01:09 crc kubenswrapper[4730]: I0131 17:01:09.464852 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:01:09 crc kubenswrapper[4730]: I0131 17:01:09.464992 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:01:09 crc kubenswrapper[4730]: I0131 17:01:09.465030 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:01:09 crc kubenswrapper[4730]: E0131 17:01:09.465084 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:01:09 crc kubenswrapper[4730]: I0131 17:01:09.465101 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:01:09 crc kubenswrapper[4730]: I0131 17:01:09.465124 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:01:09 crc kubenswrapper[4730]: E0131 17:01:09.465447 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:01:14 crc kubenswrapper[4730]: I0131 17:01:14.467298 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:01:14 crc kubenswrapper[4730]: I0131 17:01:14.467751 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:01:14 crc kubenswrapper[4730]: E0131 17:01:14.467980 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:01:20 crc kubenswrapper[4730]: I0131 17:01:20.465735 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:01:20 crc kubenswrapper[4730]: I0131 17:01:20.466553 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:01:20 crc kubenswrapper[4730]: I0131 17:01:20.466600 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:01:20 crc kubenswrapper[4730]: I0131 17:01:20.466702 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:01:20 crc kubenswrapper[4730]: I0131 17:01:20.466717 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:01:20 crc kubenswrapper[4730]: E0131 17:01:20.467427 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:01:23 crc kubenswrapper[4730]: I0131 17:01:23.465414 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:01:23 crc kubenswrapper[4730]: E0131 17:01:23.466384 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:01:27 crc kubenswrapper[4730]: I0131 17:01:27.169569 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:01:27 crc kubenswrapper[4730]: E0131 17:01:27.169791 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:01:27 crc kubenswrapper[4730]: E0131 17:01:27.170416 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:03:29.170389852 +0000 UTC m=+1995.976446808 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:01:28 crc kubenswrapper[4730]: I0131 17:01:28.466158 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:01:28 crc kubenswrapper[4730]: I0131 17:01:28.466188 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:01:28 crc kubenswrapper[4730]: E0131 17:01:28.466457 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:01:31 crc kubenswrapper[4730]: I0131 17:01:31.467284 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:01:31 crc kubenswrapper[4730]: I0131 17:01:31.468118 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:01:31 crc kubenswrapper[4730]: I0131 17:01:31.468164 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:01:31 crc kubenswrapper[4730]: I0131 17:01:31.468260 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:01:31 crc kubenswrapper[4730]: I0131 17:01:31.468275 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:01:31 crc kubenswrapper[4730]: E0131 17:01:31.469709 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:01:33 crc kubenswrapper[4730]: E0131 17:01:33.020282 4730 pod_workers.go:1301] "Error syncing pod, 
skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:01:33 crc kubenswrapper[4730]: I0131 17:01:33.280143 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:01:35 crc kubenswrapper[4730]: I0131 17:01:35.464248 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:01:35 crc kubenswrapper[4730]: E0131 17:01:35.464735 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:01:41 crc kubenswrapper[4730]: I0131 17:01:41.464695 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:01:41 crc kubenswrapper[4730]: I0131 17:01:41.465258 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:01:41 crc kubenswrapper[4730]: E0131 17:01:41.465494 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:01:43 crc kubenswrapper[4730]: I0131 17:01:43.465041 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:01:43 crc kubenswrapper[4730]: I0131 17:01:43.465468 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:01:43 crc kubenswrapper[4730]: I0131 17:01:43.465502 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:01:43 crc kubenswrapper[4730]: I0131 17:01:43.465568 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:01:43 crc kubenswrapper[4730]: I0131 17:01:43.465579 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:01:43 crc kubenswrapper[4730]: E0131 17:01:43.674040 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to 
\"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:01:44 crc kubenswrapper[4730]: I0131 17:01:44.368113 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6"} Jan 31 17:01:44 crc kubenswrapper[4730]: I0131 17:01:44.368865 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:01:44 crc kubenswrapper[4730]: I0131 17:01:44.368921 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:01:44 crc kubenswrapper[4730]: I0131 17:01:44.368998 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:01:44 crc kubenswrapper[4730]: I0131 17:01:44.369005 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:01:44 crc kubenswrapper[4730]: E0131 17:01:44.369314 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:01:46 crc kubenswrapper[4730]: I0131 17:01:46.464590 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:01:46 crc kubenswrapper[4730]: E0131 17:01:46.465057 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:01:52 crc kubenswrapper[4730]: I0131 17:01:52.465005 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:01:52 crc kubenswrapper[4730]: I0131 17:01:52.465519 4730 scope.go:117] "RemoveContainer" 
containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:01:52 crc kubenswrapper[4730]: E0131 17:01:52.465870 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:01:55 crc kubenswrapper[4730]: I0131 17:01:55.466879 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:01:55 crc kubenswrapper[4730]: I0131 17:01:55.467507 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:01:55 crc kubenswrapper[4730]: I0131 17:01:55.467714 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:01:55 crc kubenswrapper[4730]: I0131 17:01:55.467736 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:01:55 crc kubenswrapper[4730]: E0131 17:01:55.688274 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:01:56 crc kubenswrapper[4730]: I0131 17:01:56.495484 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601"} Jan 31 17:01:56 crc kubenswrapper[4730]: I0131 17:01:56.497001 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:01:56 crc kubenswrapper[4730]: I0131 17:01:56.497120 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:01:56 crc kubenswrapper[4730]: I0131 17:01:56.497308 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:01:56 crc kubenswrapper[4730]: E0131 17:01:56.497845 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:01:57 crc kubenswrapper[4730]: I0131 17:01:57.465296 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:01:57 crc kubenswrapper[4730]: E0131 17:01:57.465608 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:02:04 crc kubenswrapper[4730]: I0131 17:02:04.469348 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:02:04 crc kubenswrapper[4730]: I0131 17:02:04.469646 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:02:04 crc kubenswrapper[4730]: E0131 17:02:04.469903 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:02:08 crc kubenswrapper[4730]: I0131 17:02:08.464634 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:02:08 crc kubenswrapper[4730]: E0131 17:02:08.465435 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:02:08 crc kubenswrapper[4730]: I0131 17:02:08.466694 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:02:08 crc kubenswrapper[4730]: I0131 17:02:08.466907 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:02:08 crc kubenswrapper[4730]: I0131 17:02:08.467245 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.606821 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e"} Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.606793 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" exitCode=1 Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.607975 4730 scope.go:117] "RemoveContainer" containerID="7261861bfbcad8eec1abdd8dd6d21954f649c1f015fc49c76ea1ae3e51cfafcc" Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.608017 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" exitCode=1 Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.608084 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" exitCode=1 Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.608081 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e"} Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.608208 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e"} Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.608984 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.609059 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.609154 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:02:09 crc kubenswrapper[4730]: E0131 17:02:09.609522 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.690579 4730 scope.go:117] "RemoveContainer" containerID="9881eae872846b80faae44b5bfa2639e6341b0fc911f1805aa619dab1ef2ec1d" Jan 31 17:02:09 crc kubenswrapper[4730]: I0131 17:02:09.741422 4730 scope.go:117] "RemoveContainer" containerID="c26c4885350e51b6f4a9bc24b7ffae526416b7671d02fc912a66f64d41097da4" Jan 31 17:02:10 crc kubenswrapper[4730]: I0131 17:02:10.628154 4730 scope.go:117] "RemoveContainer" 
containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:02:10 crc kubenswrapper[4730]: I0131 17:02:10.629830 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:02:10 crc kubenswrapper[4730]: I0131 17:02:10.630094 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:02:10 crc kubenswrapper[4730]: E0131 17:02:10.631050 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.797490 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9rk74"] Jan 31 17:02:11 crc kubenswrapper[4730]: E0131 17:02:11.797955 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2480e28-9925-4151-90a2-8db7d28e20f3" containerName="keystone-cron" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.797972 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2480e28-9925-4151-90a2-8db7d28e20f3" containerName="keystone-cron" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.798250 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2480e28-9925-4151-90a2-8db7d28e20f3" containerName="keystone-cron" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.799821 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.812468 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9rk74"] Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.866398 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c66ct\" (UniqueName: \"kubernetes.io/projected/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-kube-api-access-c66ct\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.866467 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-catalog-content\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.866510 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-utilities\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.968548 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c66ct\" (UniqueName: \"kubernetes.io/projected/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-kube-api-access-c66ct\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.968637 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-catalog-content\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.968665 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-utilities\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.969202 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-catalog-content\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:11 crc kubenswrapper[4730]: I0131 17:02:11.969241 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-utilities\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:12 crc kubenswrapper[4730]: I0131 17:02:11.987639 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c66ct\" (UniqueName: \"kubernetes.io/projected/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-kube-api-access-c66ct\") pod \"certified-operators-9rk74\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:12 crc kubenswrapper[4730]: I0131 17:02:12.127701 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:12 crc kubenswrapper[4730]: I0131 17:02:12.606920 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9rk74"] Jan 31 17:02:12 crc kubenswrapper[4730]: W0131 17:02:12.608069 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12d9a12e_369c_457e_9dfb_a4cfa59b32ee.slice/crio-135cf1f05fe1d3bd5d82e20a51c4cc2941ad5cf8d3544919214a9f0cfab549ed WatchSource:0}: Error finding container 135cf1f05fe1d3bd5d82e20a51c4cc2941ad5cf8d3544919214a9f0cfab549ed: Status 404 returned error can't find the container with id 135cf1f05fe1d3bd5d82e20a51c4cc2941ad5cf8d3544919214a9f0cfab549ed Jan 31 17:02:12 crc kubenswrapper[4730]: I0131 17:02:12.642414 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rk74" event={"ID":"12d9a12e-369c-457e-9dfb-a4cfa59b32ee","Type":"ContainerStarted","Data":"135cf1f05fe1d3bd5d82e20a51c4cc2941ad5cf8d3544919214a9f0cfab549ed"} Jan 31 17:02:13 crc kubenswrapper[4730]: I0131 17:02:13.651602 4730 generic.go:334] "Generic (PLEG): container finished" podID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerID="b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5" exitCode=0 Jan 31 17:02:13 crc kubenswrapper[4730]: I0131 17:02:13.651674 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rk74" event={"ID":"12d9a12e-369c-457e-9dfb-a4cfa59b32ee","Type":"ContainerDied","Data":"b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5"} Jan 31 17:02:13 crc kubenswrapper[4730]: I0131 17:02:13.657888 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 17:02:15 crc kubenswrapper[4730]: I0131 17:02:15.673390 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rk74" event={"ID":"12d9a12e-369c-457e-9dfb-a4cfa59b32ee","Type":"ContainerStarted","Data":"05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd"} Jan 31 17:02:17 crc kubenswrapper[4730]: I0131 17:02:17.696078 4730 generic.go:334] "Generic (PLEG): container finished" podID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerID="05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd" exitCode=0 Jan 31 17:02:17 crc kubenswrapper[4730]: I0131 17:02:17.696190 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rk74" event={"ID":"12d9a12e-369c-457e-9dfb-a4cfa59b32ee","Type":"ContainerDied","Data":"05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd"} Jan 31 17:02:18 crc kubenswrapper[4730]: I0131 17:02:18.707621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rk74" event={"ID":"12d9a12e-369c-457e-9dfb-a4cfa59b32ee","Type":"ContainerStarted","Data":"e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff"} Jan 31 17:02:18 crc kubenswrapper[4730]: I0131 
17:02:18.735024 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9rk74" podStartSLOduration=3.316222275 podStartE2EDuration="7.735007296s" podCreationTimestamp="2026-01-31 17:02:11 +0000 UTC" firstStartedPulling="2026-01-31 17:02:13.657307928 +0000 UTC m=+1920.463364884" lastFinishedPulling="2026-01-31 17:02:18.076092959 +0000 UTC m=+1924.882149905" observedRunningTime="2026-01-31 17:02:18.733490474 +0000 UTC m=+1925.539547400" watchObservedRunningTime="2026-01-31 17:02:18.735007296 +0000 UTC m=+1925.541064222" Jan 31 17:02:19 crc kubenswrapper[4730]: I0131 17:02:19.464571 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:02:19 crc kubenswrapper[4730]: I0131 17:02:19.465003 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:02:19 crc kubenswrapper[4730]: I0131 17:02:19.465028 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:02:19 crc kubenswrapper[4730]: E0131 17:02:19.465262 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:02:19 crc kubenswrapper[4730]: E0131 17:02:19.465344 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:02:22 crc kubenswrapper[4730]: I0131 17:02:22.128859 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:22 crc kubenswrapper[4730]: I0131 17:02:22.129185 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:22 crc kubenswrapper[4730]: I0131 17:02:22.173485 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:25 crc kubenswrapper[4730]: I0131 17:02:25.465440 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:02:25 crc kubenswrapper[4730]: I0131 17:02:25.466153 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:02:25 crc kubenswrapper[4730]: I0131 17:02:25.466357 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:02:25 crc kubenswrapper[4730]: E0131 17:02:25.467086 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:02:32 crc kubenswrapper[4730]: I0131 17:02:32.192289 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:32 crc kubenswrapper[4730]: I0131 17:02:32.248719 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9rk74"] Jan 31 17:02:32 crc kubenswrapper[4730]: I0131 17:02:32.848095 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9rk74" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="registry-server" containerID="cri-o://e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff" gracePeriod=2 Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.349207 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.371920 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c66ct\" (UniqueName: \"kubernetes.io/projected/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-kube-api-access-c66ct\") pod \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.372174 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-utilities\") pod \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.372238 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-catalog-content\") pod \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\" (UID: \"12d9a12e-369c-457e-9dfb-a4cfa59b32ee\") " Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.373205 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-utilities" (OuterVolumeSpecName: "utilities") pod "12d9a12e-369c-457e-9dfb-a4cfa59b32ee" (UID: "12d9a12e-369c-457e-9dfb-a4cfa59b32ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.386899 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-kube-api-access-c66ct" (OuterVolumeSpecName: "kube-api-access-c66ct") pod "12d9a12e-369c-457e-9dfb-a4cfa59b32ee" (UID: "12d9a12e-369c-457e-9dfb-a4cfa59b32ee"). InnerVolumeSpecName "kube-api-access-c66ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.432535 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12d9a12e-369c-457e-9dfb-a4cfa59b32ee" (UID: "12d9a12e-369c-457e-9dfb-a4cfa59b32ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.474827 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.474859 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.474875 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c66ct\" (UniqueName: \"kubernetes.io/projected/12d9a12e-369c-457e-9dfb-a4cfa59b32ee-kube-api-access-c66ct\") on node \"crc\" DevicePath \"\"" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.862452 4730 generic.go:334] "Generic (PLEG): container finished" podID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerID="e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff" exitCode=0 Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.862492 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rk74" event={"ID":"12d9a12e-369c-457e-9dfb-a4cfa59b32ee","Type":"ContainerDied","Data":"e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff"} Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.862517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rk74" event={"ID":"12d9a12e-369c-457e-9dfb-a4cfa59b32ee","Type":"ContainerDied","Data":"135cf1f05fe1d3bd5d82e20a51c4cc2941ad5cf8d3544919214a9f0cfab549ed"} Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.862535 4730 scope.go:117] "RemoveContainer" containerID="e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.862556 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9rk74" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.897319 4730 scope.go:117] "RemoveContainer" containerID="05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.900484 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9rk74"] Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.907578 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9rk74"] Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.930502 4730 scope.go:117] "RemoveContainer" containerID="b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.978491 4730 scope.go:117] "RemoveContainer" containerID="e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff" Jan 31 17:02:33 crc kubenswrapper[4730]: E0131 17:02:33.979990 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff\": container with ID starting with e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff not found: ID does not exist" containerID="e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.980022 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff"} err="failed to get container status \"e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff\": rpc error: code = NotFound desc = could not find container \"e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff\": container with ID starting with e9cb81d8c3f7ad2474083e19c54caf95f4c48ba5d72dc92a828f70bc4f014dff not found: ID does not exist" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.980049 4730 scope.go:117] "RemoveContainer" containerID="05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd" Jan 31 17:02:33 crc kubenswrapper[4730]: E0131 17:02:33.980590 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd\": container with ID starting with 05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd not found: ID does not exist" containerID="05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.980630 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd"} err="failed to get container status \"05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd\": rpc error: code = NotFound desc = could not find container \"05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd\": container with ID starting with 05e57532fdedcac2246cb1090de6d14d84a9e740292ab07ebe194911dad195fd not found: ID does not exist" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.980657 4730 scope.go:117] "RemoveContainer" containerID="b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5" Jan 31 17:02:33 crc kubenswrapper[4730]: E0131 17:02:33.980989 4730 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5\": container with ID starting with b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5 not found: ID does not exist" containerID="b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5" Jan 31 17:02:33 crc kubenswrapper[4730]: I0131 17:02:33.981046 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5"} err="failed to get container status \"b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5\": rpc error: code = NotFound desc = could not find container \"b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5\": container with ID starting with b5aaff954dc2af4d7046d7429e1a527780cf7632ef7ef306cbd592ca97cc30c5 not found: ID does not exist" Jan 31 17:02:34 crc kubenswrapper[4730]: I0131 17:02:34.471208 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:02:34 crc kubenswrapper[4730]: I0131 17:02:34.471546 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:02:34 crc kubenswrapper[4730]: I0131 17:02:34.471576 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:02:34 crc kubenswrapper[4730]: E0131 17:02:34.472008 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:02:34 crc kubenswrapper[4730]: E0131 17:02:34.472009 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:02:34 crc kubenswrapper[4730]: I0131 17:02:34.478086 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" path="/var/lib/kubelet/pods/12d9a12e-369c-457e-9dfb-a4cfa59b32ee/volumes" Jan 31 17:02:40 crc kubenswrapper[4730]: I0131 17:02:40.465693 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:02:40 crc kubenswrapper[4730]: I0131 17:02:40.466545 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:02:40 crc kubenswrapper[4730]: I0131 17:02:40.466832 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:02:40 crc kubenswrapper[4730]: E0131 17:02:40.468501 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:02:46 crc kubenswrapper[4730]: I0131 17:02:46.464296 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:02:46 crc kubenswrapper[4730]: E0131 17:02:46.465226 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:02:47 crc kubenswrapper[4730]: I0131 17:02:47.464631 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:02:47 crc kubenswrapper[4730]: I0131 17:02:47.464993 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:02:47 crc kubenswrapper[4730]: E0131 17:02:47.465356 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:02:55 crc kubenswrapper[4730]: I0131 17:02:55.464903 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:02:55 crc kubenswrapper[4730]: I0131 17:02:55.466754 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:02:55 crc kubenswrapper[4730]: I0131 17:02:55.466974 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:02:55 crc kubenswrapper[4730]: E0131 17:02:55.467348 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:03:01 crc kubenswrapper[4730]: I0131 17:03:01.464557 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:03:01 crc kubenswrapper[4730]: E0131 17:03:01.465817 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:03:02 crc kubenswrapper[4730]: I0131 17:03:02.464620 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:02 crc kubenswrapper[4730]: I0131 17:03:02.464658 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:03:02 crc kubenswrapper[4730]: E0131 17:03:02.465109 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:07 crc kubenswrapper[4730]: I0131 17:03:07.465225 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:03:07 crc kubenswrapper[4730]: I0131 17:03:07.466002 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:03:07 crc kubenswrapper[4730]: I0131 17:03:07.466239 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:03:07 crc kubenswrapper[4730]: E0131 17:03:07.466902 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:03:13 crc kubenswrapper[4730]: I0131 17:03:13.465044 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:13 crc kubenswrapper[4730]: I0131 17:03:13.465705 4730 scope.go:117] "RemoveContainer" 
containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:03:13 crc kubenswrapper[4730]: E0131 17:03:13.466056 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:14 crc kubenswrapper[4730]: I0131 17:03:14.473196 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:03:14 crc kubenswrapper[4730]: E0131 17:03:14.473733 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:03:19 crc kubenswrapper[4730]: I0131 17:03:19.465338 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:03:19 crc kubenswrapper[4730]: I0131 17:03:19.466095 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:03:19 crc kubenswrapper[4730]: I0131 17:03:19.466213 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:03:19 crc kubenswrapper[4730]: E0131 17:03:19.466632 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:03:26 crc kubenswrapper[4730]: I0131 17:03:26.465873 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:03:26 crc kubenswrapper[4730]: E0131 17:03:26.466773 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:03:27 crc kubenswrapper[4730]: I0131 17:03:27.463917 4730 scope.go:117] "RemoveContainer" 
containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:27 crc kubenswrapper[4730]: I0131 17:03:27.463942 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:03:27 crc kubenswrapper[4730]: E0131 17:03:27.464242 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:29 crc kubenswrapper[4730]: I0131 17:03:29.221245 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:03:29 crc kubenswrapper[4730]: E0131 17:03:29.221452 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:03:29 crc kubenswrapper[4730]: E0131 17:03:29.221923 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:05:31.221894533 +0000 UTC m=+2118.027951459 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:03:33 crc kubenswrapper[4730]: I0131 17:03:33.465573 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:03:33 crc kubenswrapper[4730]: I0131 17:03:33.466404 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:03:33 crc kubenswrapper[4730]: I0131 17:03:33.466599 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:03:33 crc kubenswrapper[4730]: E0131 17:03:33.467184 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:03:36 crc kubenswrapper[4730]: E0131 17:03:36.283341 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.474027 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" exitCode=1 Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.474140 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.480850 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601"} Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.480956 4730 scope.go:117] "RemoveContainer" containerID="d4df099ba5d21d669289d474ec7fa0dd3b5838019855de2d24da990586488563" Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.482649 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.482738 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.482855 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:03:36 crc kubenswrapper[4730]: I0131 17:03:36.482880 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:03:36 crc kubenswrapper[4730]: E0131 17:03:36.483273 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:03:38 crc kubenswrapper[4730]: I0131 17:03:38.465633 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:03:39 crc kubenswrapper[4730]: I0131 17:03:39.513189 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"f8668f98817acfc5fd3cfd4762ca185e124bba2a71d4c129e398e40d29fa8b09"} Jan 31 17:03:40 crc kubenswrapper[4730]: I0131 17:03:40.464568 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:40 crc kubenswrapper[4730]: I0131 17:03:40.464912 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:03:40 crc kubenswrapper[4730]: E0131 17:03:40.465213 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for 
\"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:50 crc kubenswrapper[4730]: I0131 17:03:50.465639 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:03:50 crc kubenswrapper[4730]: I0131 17:03:50.466382 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:03:50 crc kubenswrapper[4730]: I0131 17:03:50.466546 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:03:50 crc kubenswrapper[4730]: I0131 17:03:50.466562 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:03:50 crc kubenswrapper[4730]: E0131 17:03:50.467182 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:03:52 crc kubenswrapper[4730]: I0131 17:03:52.463993 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:52 crc kubenswrapper[4730]: I0131 17:03:52.464317 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:03:52 crc kubenswrapper[4730]: E0131 17:03:52.710222 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:53 crc kubenswrapper[4730]: I0131 17:03:53.637366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f"} Jan 31 17:03:53 crc kubenswrapper[4730]: I0131 17:03:53.637726 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:03:53 crc kubenswrapper[4730]: I0131 17:03:53.638454 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:53 crc kubenswrapper[4730]: E0131 17:03:53.638830 
4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:54 crc kubenswrapper[4730]: I0131 17:03:54.647311 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" exitCode=1 Jan 31 17:03:54 crc kubenswrapper[4730]: I0131 17:03:54.647576 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f"} Jan 31 17:03:54 crc kubenswrapper[4730]: I0131 17:03:54.647609 4730 scope.go:117] "RemoveContainer" containerID="58f98fa2dfb89efcffa1bb7d8a90bfa52b7130afa2d7c93235eb3f628199541d" Jan 31 17:03:54 crc kubenswrapper[4730]: I0131 17:03:54.648204 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:54 crc kubenswrapper[4730]: I0131 17:03:54.648231 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:03:54 crc kubenswrapper[4730]: E0131 17:03:54.648687 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:54 crc kubenswrapper[4730]: I0131 17:03:54.653647 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:03:55 crc kubenswrapper[4730]: I0131 17:03:55.658311 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:55 crc kubenswrapper[4730]: I0131 17:03:55.658681 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:03:55 crc kubenswrapper[4730]: E0131 17:03:55.659014 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:03:56 crc kubenswrapper[4730]: I0131 17:03:56.667180 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:03:56 crc kubenswrapper[4730]: I0131 17:03:56.667218 4730 
scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:03:56 crc kubenswrapper[4730]: E0131 17:03:56.667695 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:04 crc kubenswrapper[4730]: I0131 17:04:04.494494 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:04:04 crc kubenswrapper[4730]: I0131 17:04:04.495256 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:04:04 crc kubenswrapper[4730]: I0131 17:04:04.495478 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:04:04 crc kubenswrapper[4730]: I0131 17:04:04.495491 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:04:04 crc kubenswrapper[4730]: E0131 17:04:04.496675 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:04:11 crc kubenswrapper[4730]: I0131 17:04:11.464705 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:04:11 crc kubenswrapper[4730]: I0131 17:04:11.464993 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:11 crc kubenswrapper[4730]: E0131 17:04:11.465429 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:19 crc 
kubenswrapper[4730]: I0131 17:04:19.464322 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:04:19 crc kubenswrapper[4730]: I0131 17:04:19.464827 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:04:19 crc kubenswrapper[4730]: I0131 17:04:19.464924 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:04:19 crc kubenswrapper[4730]: I0131 17:04:19.464934 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:04:19 crc kubenswrapper[4730]: E0131 17:04:19.465440 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:04:23 crc kubenswrapper[4730]: I0131 17:04:23.465628 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:04:23 crc kubenswrapper[4730]: I0131 17:04:23.465988 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:23 crc kubenswrapper[4730]: E0131 17:04:23.466198 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:31 crc kubenswrapper[4730]: I0131 17:04:31.467722 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:04:31 crc kubenswrapper[4730]: I0131 17:04:31.470297 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:04:31 crc kubenswrapper[4730]: I0131 17:04:31.470874 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:04:31 crc kubenswrapper[4730]: I0131 17:04:31.470923 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:04:31 crc kubenswrapper[4730]: E0131 17:04:31.472177 4730 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:04:34 crc kubenswrapper[4730]: I0131 17:04:34.467921 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:04:34 crc kubenswrapper[4730]: I0131 17:04:34.468519 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:34 crc kubenswrapper[4730]: E0131 17:04:34.701834 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:35 crc kubenswrapper[4730]: I0131 17:04:35.010850 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"348a90accfa92ae509b09fbd48c20be1d07295aa9f5e491bf15fb4d2b461d324"} Jan 31 17:04:35 crc kubenswrapper[4730]: I0131 17:04:35.011151 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:04:35 crc kubenswrapper[4730]: I0131 17:04:35.011451 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:35 crc kubenswrapper[4730]: E0131 17:04:35.011640 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:36 crc kubenswrapper[4730]: I0131 17:04:36.027912 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:36 crc kubenswrapper[4730]: E0131 17:04:36.028160 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:39 crc kubenswrapper[4730]: I0131 17:04:39.665318 4730 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:40 crc kubenswrapper[4730]: I0131 17:04:40.661143 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:42 crc kubenswrapper[4730]: I0131 17:04:42.668203 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:44 crc kubenswrapper[4730]: I0131 17:04:44.473414 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:04:44 crc kubenswrapper[4730]: I0131 17:04:44.474064 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:04:44 crc kubenswrapper[4730]: I0131 17:04:44.474484 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:04:44 crc kubenswrapper[4730]: I0131 17:04:44.474493 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:04:44 crc kubenswrapper[4730]: E0131 17:04:44.474915 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:04:45 crc kubenswrapper[4730]: I0131 17:04:45.657305 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:45 crc kubenswrapper[4730]: I0131 17:04:45.660363 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:45 crc kubenswrapper[4730]: I0131 17:04:45.660532 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:04:45 crc kubenswrapper[4730]: I0131 17:04:45.661505 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" 
containerStatusID={"Type":"cri-o","ID":"348a90accfa92ae509b09fbd48c20be1d07295aa9f5e491bf15fb4d2b461d324"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:04:45 crc kubenswrapper[4730]: I0131 17:04:45.661631 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:45 crc kubenswrapper[4730]: I0131 17:04:45.661767 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://348a90accfa92ae509b09fbd48c20be1d07295aa9f5e491bf15fb4d2b461d324" gracePeriod=30 Jan 31 17:04:45 crc kubenswrapper[4730]: I0131 17:04:45.666400 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:45 crc kubenswrapper[4730]: E0131 17:04:45.961601 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:46 crc kubenswrapper[4730]: I0131 17:04:46.125644 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="348a90accfa92ae509b09fbd48c20be1d07295aa9f5e491bf15fb4d2b461d324" exitCode=0 Jan 31 17:04:46 crc kubenswrapper[4730]: I0131 17:04:46.125688 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"348a90accfa92ae509b09fbd48c20be1d07295aa9f5e491bf15fb4d2b461d324"} Jan 31 17:04:46 crc kubenswrapper[4730]: I0131 17:04:46.125718 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d"} Jan 31 17:04:46 crc kubenswrapper[4730]: I0131 17:04:46.125739 4730 scope.go:117] "RemoveContainer" containerID="3d3793b67d5bac6c41d6909cd37ec1087d8a064c7587483e4c9754749c1d6506" Jan 31 17:04:46 crc kubenswrapper[4730]: I0131 17:04:46.126026 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:04:46 crc kubenswrapper[4730]: I0131 17:04:46.126499 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:46 crc kubenswrapper[4730]: E0131 17:04:46.126761 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:47 crc kubenswrapper[4730]: I0131 17:04:47.139492 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:47 
crc kubenswrapper[4730]: E0131 17:04:47.140043 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:50 crc kubenswrapper[4730]: I0131 17:04:50.666637 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:51 crc kubenswrapper[4730]: I0131 17:04:51.658564 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:54 crc kubenswrapper[4730]: I0131 17:04:54.665659 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:55 crc kubenswrapper[4730]: I0131 17:04:55.658499 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:57 crc kubenswrapper[4730]: I0131 17:04:57.663276 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:04:57 crc kubenswrapper[4730]: I0131 17:04:57.663703 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:04:57 crc kubenswrapper[4730]: I0131 17:04:57.664799 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:04:57 crc kubenswrapper[4730]: I0131 17:04:57.664870 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:57 crc kubenswrapper[4730]: I0131 17:04:57.664922 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" gracePeriod=30 Jan 31 17:04:57 crc kubenswrapper[4730]: I0131 17:04:57.671500 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": EOF" Jan 31 17:04:57 crc kubenswrapper[4730]: E0131 17:04:57.789885 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.231155 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" exitCode=0 Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.231289 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d"} Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.231471 4730 scope.go:117] "RemoveContainer" containerID="348a90accfa92ae509b09fbd48c20be1d07295aa9f5e491bf15fb4d2b461d324" Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.232666 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.232724 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:04:58 crc kubenswrapper[4730]: E0131 17:04:58.233267 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.465385 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.465478 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.465585 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:04:58 crc kubenswrapper[4730]: I0131 17:04:58.465594 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:04:58 crc kubenswrapper[4730]: E0131 17:04:58.465991 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for 
\"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:05:10 crc kubenswrapper[4730]: I0131 17:05:10.465001 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:05:10 crc kubenswrapper[4730]: I0131 17:05:10.465759 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:05:10 crc kubenswrapper[4730]: I0131 17:05:10.465975 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:05:10 crc kubenswrapper[4730]: I0131 17:05:10.465990 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:05:10 crc kubenswrapper[4730]: E0131 17:05:10.466638 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:05:12 crc kubenswrapper[4730]: I0131 17:05:12.466277 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:05:12 crc kubenswrapper[4730]: I0131 17:05:12.466306 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:05:12 crc kubenswrapper[4730]: E0131 17:05:12.466611 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:05:22 crc kubenswrapper[4730]: I0131 17:05:22.466044 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:05:22 crc kubenswrapper[4730]: I0131 17:05:22.466733 4730 scope.go:117] "RemoveContainer" 
containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:05:22 crc kubenswrapper[4730]: I0131 17:05:22.466946 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:05:22 crc kubenswrapper[4730]: I0131 17:05:22.466962 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:05:22 crc kubenswrapper[4730]: E0131 17:05:22.467535 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:05:24 crc kubenswrapper[4730]: I0131 17:05:24.477998 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:05:24 crc kubenswrapper[4730]: I0131 17:05:24.478341 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:05:24 crc kubenswrapper[4730]: E0131 17:05:24.480004 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:05:31 crc kubenswrapper[4730]: I0131 17:05:31.255304 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:05:31 crc kubenswrapper[4730]: E0131 17:05:31.255486 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:05:31 crc kubenswrapper[4730]: E0131 17:05:31.256082 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:07:33.256056664 +0000 UTC m=+2240.062113620 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:05:36 crc kubenswrapper[4730]: I0131 17:05:36.463949 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:05:36 crc kubenswrapper[4730]: I0131 17:05:36.464380 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:05:36 crc kubenswrapper[4730]: I0131 17:05:36.464476 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:05:36 crc kubenswrapper[4730]: I0131 17:05:36.464484 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:05:36 crc kubenswrapper[4730]: E0131 17:05:36.464746 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:05:39 crc kubenswrapper[4730]: I0131 17:05:39.464370 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:05:39 crc kubenswrapper[4730]: I0131 17:05:39.465090 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:05:39 crc kubenswrapper[4730]: E0131 17:05:39.465518 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:05:39 crc kubenswrapper[4730]: E0131 17:05:39.475490 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:05:39 crc kubenswrapper[4730]: I0131 17:05:39.682045 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:05:51 crc kubenswrapper[4730]: I0131 17:05:51.465410 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:05:51 crc kubenswrapper[4730]: I0131 17:05:51.466053 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:05:51 crc kubenswrapper[4730]: I0131 17:05:51.466130 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:05:51 crc kubenswrapper[4730]: I0131 17:05:51.466138 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:05:51 crc kubenswrapper[4730]: E0131 17:05:51.466470 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:05:53 crc kubenswrapper[4730]: I0131 17:05:53.464770 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:05:53 crc kubenswrapper[4730]: I0131 17:05:53.465288 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:05:53 crc kubenswrapper[4730]: E0131 17:05:53.465794 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:05:56 crc kubenswrapper[4730]: I0131 17:05:56.975460 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:05:56 crc kubenswrapper[4730]: I0131 17:05:56.975983 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 31 17:06:02 crc kubenswrapper[4730]: I0131 17:06:02.465660 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:06:02 crc kubenswrapper[4730]: I0131 17:06:02.466249 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:06:02 crc kubenswrapper[4730]: I0131 17:06:02.466376 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:06:02 crc kubenswrapper[4730]: I0131 17:06:02.466390 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:06:02 crc kubenswrapper[4730]: E0131 17:06:02.467017 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:06:06 crc kubenswrapper[4730]: I0131 17:06:06.464861 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:06:06 crc kubenswrapper[4730]: I0131 17:06:06.465643 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:06:06 crc kubenswrapper[4730]: E0131 17:06:06.465896 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:06:16 crc kubenswrapper[4730]: I0131 17:06:16.465196 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:06:16 crc kubenswrapper[4730]: I0131 17:06:16.465738 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:06:16 crc kubenswrapper[4730]: I0131 17:06:16.465867 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:06:16 crc kubenswrapper[4730]: I0131 17:06:16.465879 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:06:16 crc kubenswrapper[4730]: E0131 
17:06:16.466284 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:06:19 crc kubenswrapper[4730]: I0131 17:06:19.464293 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:06:19 crc kubenswrapper[4730]: I0131 17:06:19.464613 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:06:19 crc kubenswrapper[4730]: E0131 17:06:19.464927 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:06:26 crc kubenswrapper[4730]: I0131 17:06:26.975941 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:06:26 crc kubenswrapper[4730]: I0131 17:06:26.976579 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.157945 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" exitCode=1 Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.158197 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6"} Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.158492 4730 scope.go:117] "RemoveContainer" containerID="17f7a33830c8777b805c6edba65283177f0229b21a224ed3b5e8e58184905db3" Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.159437 4730 
scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.159514 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.159544 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.159622 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:06:29 crc kubenswrapper[4730]: I0131 17:06:29.159631 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:06:29 crc kubenswrapper[4730]: E0131 17:06:29.160176 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:06:32 crc kubenswrapper[4730]: I0131 17:06:32.464615 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:06:32 crc kubenswrapper[4730]: I0131 17:06:32.465124 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:06:32 crc kubenswrapper[4730]: E0131 17:06:32.465646 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:06:39 crc kubenswrapper[4730]: I0131 17:06:39.464609 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:06:39 crc kubenswrapper[4730]: I0131 17:06:39.465233 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:06:39 crc kubenswrapper[4730]: I0131 17:06:39.465255 4730 scope.go:117] "RemoveContainer" 
containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:06:39 crc kubenswrapper[4730]: I0131 17:06:39.465299 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:06:39 crc kubenswrapper[4730]: I0131 17:06:39.465307 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:06:39 crc kubenswrapper[4730]: E0131 17:06:39.465615 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:06:43 crc kubenswrapper[4730]: I0131 17:06:43.464198 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:06:43 crc kubenswrapper[4730]: I0131 17:06:43.464577 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:06:43 crc kubenswrapper[4730]: E0131 17:06:43.464949 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:06:54 crc kubenswrapper[4730]: I0131 17:06:54.467547 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:06:54 crc kubenswrapper[4730]: I0131 17:06:54.468207 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:06:54 crc kubenswrapper[4730]: I0131 17:06:54.468236 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:06:54 crc kubenswrapper[4730]: I0131 17:06:54.468291 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:06:54 crc kubenswrapper[4730]: I0131 17:06:54.468383 4730 scope.go:117] "RemoveContainer" 
containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:06:54 crc kubenswrapper[4730]: I0131 17:06:54.468419 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:06:54 crc kubenswrapper[4730]: I0131 17:06:54.468432 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:06:54 crc kubenswrapper[4730]: E0131 17:06:54.468638 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:06:54 crc kubenswrapper[4730]: E0131 17:06:54.468941 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:06:56 crc kubenswrapper[4730]: I0131 17:06:56.975073 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:06:56 crc kubenswrapper[4730]: I0131 17:06:56.975376 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:06:56 crc kubenswrapper[4730]: I0131 17:06:56.975436 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 17:06:56 crc kubenswrapper[4730]: I0131 17:06:56.976193 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f8668f98817acfc5fd3cfd4762ca185e124bba2a71d4c129e398e40d29fa8b09"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 17:06:56 crc kubenswrapper[4730]: I0131 17:06:56.976265 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://f8668f98817acfc5fd3cfd4762ca185e124bba2a71d4c129e398e40d29fa8b09" gracePeriod=600 Jan 31 17:06:57 crc kubenswrapper[4730]: I0131 17:06:57.410599 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="f8668f98817acfc5fd3cfd4762ca185e124bba2a71d4c129e398e40d29fa8b09" exitCode=0 Jan 31 17:06:57 crc kubenswrapper[4730]: I0131 17:06:57.410638 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"f8668f98817acfc5fd3cfd4762ca185e124bba2a71d4c129e398e40d29fa8b09"} Jan 31 17:06:57 crc kubenswrapper[4730]: I0131 17:06:57.410661 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf"} Jan 31 17:06:57 crc kubenswrapper[4730]: I0131 17:06:57.410676 4730 scope.go:117] "RemoveContainer" containerID="1868c5d85a09a51ae52ad3070e620d11b91e6f823145ba40cfe214a5b702dc1d" Jan 31 17:07:05 crc kubenswrapper[4730]: I0131 17:07:05.464548 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:07:05 crc kubenswrapper[4730]: I0131 17:07:05.465135 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:07:05 crc kubenswrapper[4730]: E0131 17:07:05.465603 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:07:06 crc kubenswrapper[4730]: I0131 17:07:06.464549 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:07:06 crc kubenswrapper[4730]: I0131 17:07:06.465144 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:07:06 crc kubenswrapper[4730]: I0131 17:07:06.465173 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:07:06 crc kubenswrapper[4730]: I0131 17:07:06.465234 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:07:06 crc kubenswrapper[4730]: I0131 17:07:06.465243 4730 
scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:07:06 crc kubenswrapper[4730]: E0131 17:07:06.465651 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:07:17 crc kubenswrapper[4730]: I0131 17:07:17.464943 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:07:17 crc kubenswrapper[4730]: I0131 17:07:17.465412 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:07:17 crc kubenswrapper[4730]: E0131 17:07:17.465657 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:07:19 crc kubenswrapper[4730]: I0131 17:07:19.475428 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:07:19 crc kubenswrapper[4730]: I0131 17:07:19.475818 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:07:19 crc kubenswrapper[4730]: I0131 17:07:19.475850 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:07:19 crc kubenswrapper[4730]: I0131 17:07:19.475982 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:07:19 crc kubenswrapper[4730]: I0131 17:07:19.475993 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:07:20 crc kubenswrapper[4730]: E0131 17:07:20.070359 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.646469 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" exitCode=1 Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.646500 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" exitCode=1 Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.646508 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" exitCode=1 Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.646528 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5"} Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.646554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b"} Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.646565 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958"} Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.646581 4730 scope.go:117] "RemoveContainer" containerID="0bdd7a09eafea49dff36e9453a7a3cc7f6970c3106e46acb916ae56c936a7f2e" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.647371 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.647432 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.647475 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.647521 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.647528 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:07:20 crc kubenswrapper[4730]: E0131 17:07:20.647870 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.734114 4730 scope.go:117] "RemoveContainer" containerID="8c7a75e3ff5509d57d741934d2daee98b8f43c10b60a96ddba3d3811cd6b1e0e" Jan 31 17:07:20 crc kubenswrapper[4730]: I0131 17:07:20.782200 4730 scope.go:117] "RemoveContainer" containerID="68f288bb1e1ad99967531ae05e022be6ad3c509cf105c042f3878f92896c2d2e" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.669107 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p9g62"] Jan 31 17:07:23 crc kubenswrapper[4730]: E0131 17:07:23.670015 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="extract-utilities" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.670033 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="extract-utilities" Jan 31 17:07:23 crc kubenswrapper[4730]: E0131 17:07:23.670059 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="registry-server" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.670067 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="registry-server" Jan 31 17:07:23 crc kubenswrapper[4730]: E0131 17:07:23.670077 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="extract-content" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.670086 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="extract-content" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.670331 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d9a12e-369c-457e-9dfb-a4cfa59b32ee" containerName="registry-server"
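The "Error syncing pod" entries above show most swift-storage-0 containers already held at a 5m0s crash-loop back-off while container-updater is still at 2m40s. A minimal sketch of where those figures come from, assuming the default kubelet CrashLoopBackOff policy (10s initial delay, doubled after each failed restart, capped at 5 minutes) rather than anything stated in this log:

```python
# Sketch of the assumed kubelet CrashLoopBackOff progression: 10s initial
# delay, doubled after each failed restart, capped at 5 minutes. The cap
# (300s -> "5m0s") and the intermediate value (160s -> "2m40s") match the
# back-off strings reported for swift-storage-0 above.

def crashloop_backoffs(initial=10, factor=2, cap=300, restarts=8):
    """Yield the back-off (seconds) applied before each successive restart."""
    delay = initial
    for _ in range(restarts):
        yield min(delay, cap)
        delay *= factor

if __name__ == "__main__":
    seq = list(crashloop_backoffs())
    print(seq)                     # [10, 20, 40, 80, 160, 300, 300, 300]
    print(160 in seq, 300 in seq)  # True True -> "2m40s" and "5m0s"
```

Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.673859 4730 util.go:30] "No sandbox for pod can be found.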
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.685225 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9g62"] Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.740002 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsbgv\" (UniqueName: \"kubernetes.io/projected/67c8a3d8-2892-4679-8962-0bd835970d44-kube-api-access-hsbgv\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.740093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-utilities\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.740145 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-catalog-content\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.841656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsbgv\" (UniqueName: \"kubernetes.io/projected/67c8a3d8-2892-4679-8962-0bd835970d44-kube-api-access-hsbgv\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.841747 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-utilities\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.841783 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-catalog-content\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.842305 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-utilities\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.842354 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-catalog-content\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:23 crc kubenswrapper[4730]: I0131 17:07:23.861613 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hsbgv\" (UniqueName: \"kubernetes.io/projected/67c8a3d8-2892-4679-8962-0bd835970d44-kube-api-access-hsbgv\") pod \"redhat-marketplace-p9g62\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:24 crc kubenswrapper[4730]: I0131 17:07:24.007825 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:24 crc kubenswrapper[4730]: I0131 17:07:24.479259 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9g62"] Jan 31 17:07:24 crc kubenswrapper[4730]: W0131 17:07:24.483758 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c8a3d8_2892_4679_8962_0bd835970d44.slice/crio-83c79eddcd1666e8e73acfaf0d2f903d0703f35cffc7825847e831fc3bd15ba8 WatchSource:0}: Error finding container 83c79eddcd1666e8e73acfaf0d2f903d0703f35cffc7825847e831fc3bd15ba8: Status 404 returned error can't find the container with id 83c79eddcd1666e8e73acfaf0d2f903d0703f35cffc7825847e831fc3bd15ba8 Jan 31 17:07:24 crc kubenswrapper[4730]: I0131 17:07:24.701141 4730 generic.go:334] "Generic (PLEG): container finished" podID="67c8a3d8-2892-4679-8962-0bd835970d44" containerID="f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a" exitCode=0 Jan 31 17:07:24 crc kubenswrapper[4730]: I0131 17:07:24.701318 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9g62" event={"ID":"67c8a3d8-2892-4679-8962-0bd835970d44","Type":"ContainerDied","Data":"f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a"} Jan 31 17:07:24 crc kubenswrapper[4730]: I0131 17:07:24.701502 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9g62" event={"ID":"67c8a3d8-2892-4679-8962-0bd835970d44","Type":"ContainerStarted","Data":"83c79eddcd1666e8e73acfaf0d2f903d0703f35cffc7825847e831fc3bd15ba8"} Jan 31 17:07:24 crc kubenswrapper[4730]: I0131 17:07:24.703938 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 17:07:25 crc kubenswrapper[4730]: I0131 17:07:25.714305 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9g62" event={"ID":"67c8a3d8-2892-4679-8962-0bd835970d44","Type":"ContainerStarted","Data":"52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37"} Jan 31 17:07:26 crc kubenswrapper[4730]: I0131 17:07:26.723642 4730 generic.go:334] "Generic (PLEG): container finished" podID="67c8a3d8-2892-4679-8962-0bd835970d44" containerID="52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37" exitCode=0 Jan 31 17:07:26 crc kubenswrapper[4730]: I0131 17:07:26.723723 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9g62" event={"ID":"67c8a3d8-2892-4679-8962-0bd835970d44","Type":"ContainerDied","Data":"52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37"} Jan 31 17:07:27 crc kubenswrapper[4730]: I0131 17:07:27.735747 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9g62" event={"ID":"67c8a3d8-2892-4679-8962-0bd835970d44","Type":"ContainerStarted","Data":"f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504"} Jan 31 17:07:27 crc kubenswrapper[4730]: I0131 17:07:27.755137 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p9g62" podStartSLOduration=2.325257676 podStartE2EDuration="4.755116042s" podCreationTimestamp="2026-01-31 17:07:23 +0000 UTC" firstStartedPulling="2026-01-31 17:07:24.70360317 +0000 UTC m=+2231.509660096" lastFinishedPulling="2026-01-31 17:07:27.133461536 +0000 UTC m=+2233.939518462" observedRunningTime="2026-01-31 17:07:27.751546961 +0000 UTC m=+2234.557603887" watchObservedRunningTime="2026-01-31 17:07:27.755116042 +0000 UTC m=+2234.561172958" Jan 31 17:07:31 crc kubenswrapper[4730]: I0131 17:07:31.464484 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:07:31 crc kubenswrapper[4730]: I0131 17:07:31.464996 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:07:31 crc kubenswrapper[4730]: E0131 17:07:31.465310 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:07:33 crc kubenswrapper[4730]: I0131 17:07:33.353032 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:07:33 crc kubenswrapper[4730]: E0131 17:07:33.353182 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found
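Two timing relations in the surrounding entries can be checked directly: the 17:07:27 startup-latency entry above (podStartSLOduration versus podStartE2EDuration minus the image-pull window) and the 17:07:33 volume retry entry just below (the retry scheduled durationBeforeRetry after the failure). A quick check with values copied from those entries; treating the SLO duration as E2E duration minus pull time is an inference from the numbers, not something the log states:

```python
# Arithmetic checks against the entries noted above (values copied verbatim
# from the log; all times expressed as seconds past 17:00).

e2e = 4.755116042                                       # podStartE2EDuration
pull = (7*60 + 27.133461536) - (7*60 + 24.70360317)     # lastFinishedPulling - firstStartedPulling
print(round(e2e - pull, 9))        # 2.325257676 == podStartSLOduration

# ring-data-devices volume: failure logged at 17:07:33.353444, next retry
# permitted at 17:09:35.353428097 -> ~122 s, i.e. the reported "2m2s".
retry_at = 9*60 + 35.353428097
failed_at = 7*60 + 33.353444
print(round(retry_at - failed_at, 3))   # 122.0 seconds
```

Jan 31 17:07:33 crc kubenswrapper[4730]: E0131 17:07:33.353444 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:09:35.353428097 +0000 UTC m=+2362.159485013 (durationBeforeRetry 2m2s).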
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.008737 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.009234 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.095152 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.466732 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.466891 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.466937 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.467032 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.467045 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:07:34 crc kubenswrapper[4730]: E0131 17:07:34.468276 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.861008 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:34 crc kubenswrapper[4730]: I0131 17:07:34.939283 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9g62"] Jan 31 17:07:36 crc kubenswrapper[4730]: I0131 17:07:36.811195 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p9g62" 
podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="registry-server" containerID="cri-o://f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504" gracePeriod=2 Jan 31 17:07:36 crc kubenswrapper[4730]: E0131 17:07:36.967703 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c8a3d8_2892_4679_8962_0bd835970d44.slice/crio-f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504.scope\": RecentStats: unable to find data in memory cache]" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.292890 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.360520 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-catalog-content\") pod \"67c8a3d8-2892-4679-8962-0bd835970d44\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.360624 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-utilities\") pod \"67c8a3d8-2892-4679-8962-0bd835970d44\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.361059 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsbgv\" (UniqueName: \"kubernetes.io/projected/67c8a3d8-2892-4679-8962-0bd835970d44-kube-api-access-hsbgv\") pod \"67c8a3d8-2892-4679-8962-0bd835970d44\" (UID: \"67c8a3d8-2892-4679-8962-0bd835970d44\") " Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.361357 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-utilities" (OuterVolumeSpecName: "utilities") pod "67c8a3d8-2892-4679-8962-0bd835970d44" (UID: "67c8a3d8-2892-4679-8962-0bd835970d44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.361711 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.368712 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c8a3d8-2892-4679-8962-0bd835970d44-kube-api-access-hsbgv" (OuterVolumeSpecName: "kube-api-access-hsbgv") pod "67c8a3d8-2892-4679-8962-0bd835970d44" (UID: "67c8a3d8-2892-4679-8962-0bd835970d44"). InnerVolumeSpecName "kube-api-access-hsbgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.392357 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67c8a3d8-2892-4679-8962-0bd835970d44" (UID: "67c8a3d8-2892-4679-8962-0bd835970d44"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.463235 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsbgv\" (UniqueName: \"kubernetes.io/projected/67c8a3d8-2892-4679-8962-0bd835970d44-kube-api-access-hsbgv\") on node \"crc\" DevicePath \"\"" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.463279 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c8a3d8-2892-4679-8962-0bd835970d44-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.821414 4730 generic.go:334] "Generic (PLEG): container finished" podID="67c8a3d8-2892-4679-8962-0bd835970d44" containerID="f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504" exitCode=0 Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.821593 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9g62" event={"ID":"67c8a3d8-2892-4679-8962-0bd835970d44","Type":"ContainerDied","Data":"f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504"} Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.821745 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9g62" event={"ID":"67c8a3d8-2892-4679-8962-0bd835970d44","Type":"ContainerDied","Data":"83c79eddcd1666e8e73acfaf0d2f903d0703f35cffc7825847e831fc3bd15ba8"} Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.821751 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9g62" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.821784 4730 scope.go:117] "RemoveContainer" containerID="f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.866064 4730 scope.go:117] "RemoveContainer" containerID="52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.872332 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9g62"] Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.877477 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9g62"] Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.895209 4730 scope.go:117] "RemoveContainer" containerID="f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.935797 4730 scope.go:117] "RemoveContainer" containerID="f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504" Jan 31 17:07:37 crc kubenswrapper[4730]: E0131 17:07:37.936468 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504\": container with ID starting with f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504 not found: ID does not exist" containerID="f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.936546 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504"} err="failed to get container status 
\"f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504\": rpc error: code = NotFound desc = could not find container \"f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504\": container with ID starting with f79a4b15d963c29f85cbcd183726a8bef0bd21b89a17c1947aeb6f437dae9504 not found: ID does not exist" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.936642 4730 scope.go:117] "RemoveContainer" containerID="52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37" Jan 31 17:07:37 crc kubenswrapper[4730]: E0131 17:07:37.937105 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37\": container with ID starting with 52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37 not found: ID does not exist" containerID="52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.937167 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37"} err="failed to get container status \"52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37\": rpc error: code = NotFound desc = could not find container \"52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37\": container with ID starting with 52c6c0b1a7e939ab9d608079978dd139a042098418a315bc40daa24161081c37 not found: ID does not exist" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.937221 4730 scope.go:117] "RemoveContainer" containerID="f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a" Jan 31 17:07:37 crc kubenswrapper[4730]: E0131 17:07:37.937773 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a\": container with ID starting with f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a not found: ID does not exist" containerID="f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a" Jan 31 17:07:37 crc kubenswrapper[4730]: I0131 17:07:37.937889 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a"} err="failed to get container status \"f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a\": rpc error: code = NotFound desc = could not find container \"f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a\": container with ID starting with f78ee6aeaa23496bc4b9b65b7a847b5207ecaed852366759d65857bcb2e0541a not found: ID does not exist" Jan 31 17:07:38 crc kubenswrapper[4730]: I0131 17:07:38.478487 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c8a3d8-2892-4679-8962-0bd835970d44" path="/var/lib/kubelet/pods/67c8a3d8-2892-4679-8962-0bd835970d44/volumes" Jan 31 17:07:42 crc kubenswrapper[4730]: E0131 17:07:42.683910 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:07:42 crc kubenswrapper[4730]: I0131 17:07:42.872231 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:07:45 crc kubenswrapper[4730]: I0131 17:07:45.465110 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:07:45 crc kubenswrapper[4730]: I0131 17:07:45.465525 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:07:45 crc kubenswrapper[4730]: I0131 17:07:45.465568 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:07:45 crc kubenswrapper[4730]: I0131 17:07:45.465662 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:07:45 crc kubenswrapper[4730]: I0131 17:07:45.465674 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:07:45 crc kubenswrapper[4730]: E0131 17:07:45.466306 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:07:46 crc kubenswrapper[4730]: I0131 17:07:46.465069 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:07:46 crc kubenswrapper[4730]: I0131 17:07:46.465649 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:07:46 crc kubenswrapper[4730]: E0131 17:07:46.467071 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.091572 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c82xw"] Jan 31 17:07:55 crc kubenswrapper[4730]: E0131 17:07:55.093527 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="registry-server" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.093632 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="registry-server" Jan 31 17:07:55 crc kubenswrapper[4730]: E0131 17:07:55.093731 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="extract-content" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.093847 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="extract-content" Jan 31 17:07:55 crc kubenswrapper[4730]: E0131 17:07:55.093952 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="extract-utilities" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.094030 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="extract-utilities" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.094336 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c8a3d8-2892-4679-8962-0bd835970d44" containerName="registry-server" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.096494 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.111734 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c82xw"] Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.267348 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-catalog-content\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.267411 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcbw8\" (UniqueName: \"kubernetes.io/projected/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-kube-api-access-bcbw8\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.267571 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-utilities\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.369379 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-utilities\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.369488 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-catalog-content\") pod 
\"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.369508 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcbw8\" (UniqueName: \"kubernetes.io/projected/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-kube-api-access-bcbw8\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.369943 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-utilities\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.370128 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-catalog-content\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.392468 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcbw8\" (UniqueName: \"kubernetes.io/projected/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-kube-api-access-bcbw8\") pod \"community-operators-c82xw\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.422882 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.771573 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c82xw"] Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.991407 4730 generic.go:334] "Generic (PLEG): container finished" podID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerID="cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3" exitCode=0 Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.991579 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c82xw" event={"ID":"2e3b5a8a-afa9-4c03-a74b-7b53185ff829","Type":"ContainerDied","Data":"cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3"} Jan 31 17:07:55 crc kubenswrapper[4730]: I0131 17:07:55.991776 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c82xw" event={"ID":"2e3b5a8a-afa9-4c03-a74b-7b53185ff829","Type":"ContainerStarted","Data":"7f8fb647e83ad45bce7e147b13fa274d5d3b93e06d9453d95fd922ade68e098e"} Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.002607 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c82xw" event={"ID":"2e3b5a8a-afa9-4c03-a74b-7b53185ff829","Type":"ContainerStarted","Data":"303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d"} Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.494275 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nxfhh"] Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.499854 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.514695 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxfhh"] Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.609493 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-utilities\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.609579 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp76h\" (UniqueName: \"kubernetes.io/projected/61f4062a-9d13-4d85-bea4-1eebfc32260e-kube-api-access-qp76h\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.609642 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-catalog-content\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.711928 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-utilities\") pod \"redhat-operators-nxfhh\" (UID: 
\"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.711989 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp76h\" (UniqueName: \"kubernetes.io/projected/61f4062a-9d13-4d85-bea4-1eebfc32260e-kube-api-access-qp76h\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.712040 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-catalog-content\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.712392 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-utilities\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.712508 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-catalog-content\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.733093 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp76h\" (UniqueName: \"kubernetes.io/projected/61f4062a-9d13-4d85-bea4-1eebfc32260e-kube-api-access-qp76h\") pod \"redhat-operators-nxfhh\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:57 crc kubenswrapper[4730]: I0131 17:07:57.824292 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.103488 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nxfhh"] Jan 31 17:07:58 crc kubenswrapper[4730]: W0131 17:07:58.113874 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61f4062a_9d13_4d85_bea4_1eebfc32260e.slice/crio-37c69755c64756224f7a301b6e2fe2a629161affe026e17809c22ce845a19462 WatchSource:0}: Error finding container 37c69755c64756224f7a301b6e2fe2a629161affe026e17809c22ce845a19462: Status 404 returned error can't find the container with id 37c69755c64756224f7a301b6e2fe2a629161affe026e17809c22ce845a19462 Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.464227 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.464251 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:07:58 crc kubenswrapper[4730]: E0131 17:07:58.464502 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.464707 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.464770 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.464794 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.464858 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:07:58 crc kubenswrapper[4730]: I0131 17:07:58.464868 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:07:58 crc kubenswrapper[4730]: E0131 17:07:58.465213 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:07:59 crc kubenswrapper[4730]: I0131 17:07:59.018172 4730 generic.go:334] "Generic (PLEG): container finished" podID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerID="2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e" exitCode=0 Jan 31 17:07:59 crc kubenswrapper[4730]: I0131 17:07:59.018268 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxfhh" event={"ID":"61f4062a-9d13-4d85-bea4-1eebfc32260e","Type":"ContainerDied","Data":"2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e"} Jan 31 17:07:59 crc kubenswrapper[4730]: I0131 17:07:59.018482 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxfhh" event={"ID":"61f4062a-9d13-4d85-bea4-1eebfc32260e","Type":"ContainerStarted","Data":"37c69755c64756224f7a301b6e2fe2a629161affe026e17809c22ce845a19462"} Jan 31 17:07:59 crc kubenswrapper[4730]: I0131 17:07:59.022183 4730 generic.go:334] "Generic (PLEG): container finished" podID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerID="303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d" exitCode=0 Jan 31 17:07:59 crc kubenswrapper[4730]: I0131 17:07:59.022223 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c82xw" event={"ID":"2e3b5a8a-afa9-4c03-a74b-7b53185ff829","Type":"ContainerDied","Data":"303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d"} Jan 31 17:08:00 crc kubenswrapper[4730]: I0131 17:08:00.036790 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxfhh" event={"ID":"61f4062a-9d13-4d85-bea4-1eebfc32260e","Type":"ContainerStarted","Data":"cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659"} Jan 31 17:08:00 crc kubenswrapper[4730]: I0131 17:08:00.041094 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c82xw" event={"ID":"2e3b5a8a-afa9-4c03-a74b-7b53185ff829","Type":"ContainerStarted","Data":"268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f"} Jan 31 17:08:00 crc kubenswrapper[4730]: I0131 17:08:00.098765 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c82xw" podStartSLOduration=1.676790514 podStartE2EDuration="5.098734709s" podCreationTimestamp="2026-01-31 17:07:55 +0000 UTC" firstStartedPulling="2026-01-31 17:07:55.99344007 +0000 UTC m=+2262.799496986" lastFinishedPulling="2026-01-31 17:07:59.415384275 +0000 UTC m=+2266.221441181" observedRunningTime="2026-01-31 17:08:00.090878898 +0000 UTC m=+2266.896935854" watchObservedRunningTime="2026-01-31 17:08:00.098734709 +0000 UTC m=+2266.904791665" Jan 31 17:08:05 crc kubenswrapper[4730]: I0131 17:08:05.115729 4730 generic.go:334] "Generic (PLEG): container finished" podID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerID="cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659" exitCode=0 Jan 31 17:08:05 crc kubenswrapper[4730]: I0131 17:08:05.115896 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nxfhh" event={"ID":"61f4062a-9d13-4d85-bea4-1eebfc32260e","Type":"ContainerDied","Data":"cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659"} Jan 31 17:08:05 crc kubenswrapper[4730]: I0131 17:08:05.424169 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:08:05 crc kubenswrapper[4730]: I0131 17:08:05.424251 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:08:05 crc kubenswrapper[4730]: I0131 17:08:05.491060 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:08:06 crc kubenswrapper[4730]: I0131 17:08:06.138486 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxfhh" event={"ID":"61f4062a-9d13-4d85-bea4-1eebfc32260e","Type":"ContainerStarted","Data":"fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b"} Jan 31 17:08:06 crc kubenswrapper[4730]: I0131 17:08:06.195512 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:08:06 crc kubenswrapper[4730]: I0131 17:08:06.197704 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nxfhh" podStartSLOduration=2.698764914 podStartE2EDuration="9.197688626s" podCreationTimestamp="2026-01-31 17:07:57 +0000 UTC" firstStartedPulling="2026-01-31 17:07:59.021005488 +0000 UTC m=+2265.827062404" lastFinishedPulling="2026-01-31 17:08:05.51992915 +0000 UTC m=+2272.325986116" observedRunningTime="2026-01-31 17:08:06.16377589 +0000 UTC m=+2272.969832846" watchObservedRunningTime="2026-01-31 17:08:06.197688626 +0000 UTC m=+2273.003745552" Jan 31 17:08:07 crc kubenswrapper[4730]: I0131 17:08:07.673865 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c82xw"] Jan 31 17:08:07 crc kubenswrapper[4730]: I0131 17:08:07.824868 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:08:07 crc kubenswrapper[4730]: I0131 17:08:07.824920 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.159157 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c82xw" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="registry-server" containerID="cri-o://268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f" gracePeriod=2 Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.603898 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.746746 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcbw8\" (UniqueName: \"kubernetes.io/projected/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-kube-api-access-bcbw8\") pod \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.746823 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-catalog-content\") pod \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.746922 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-utilities\") pod \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\" (UID: \"2e3b5a8a-afa9-4c03-a74b-7b53185ff829\") " Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.748083 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-utilities" (OuterVolumeSpecName: "utilities") pod "2e3b5a8a-afa9-4c03-a74b-7b53185ff829" (UID: "2e3b5a8a-afa9-4c03-a74b-7b53185ff829"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.753714 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-kube-api-access-bcbw8" (OuterVolumeSpecName: "kube-api-access-bcbw8") pod "2e3b5a8a-afa9-4c03-a74b-7b53185ff829" (UID: "2e3b5a8a-afa9-4c03-a74b-7b53185ff829"). InnerVolumeSpecName "kube-api-access-bcbw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.797390 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e3b5a8a-afa9-4c03-a74b-7b53185ff829" (UID: "2e3b5a8a-afa9-4c03-a74b-7b53185ff829"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.849408 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcbw8\" (UniqueName: \"kubernetes.io/projected/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-kube-api-access-bcbw8\") on node \"crc\" DevicePath \"\"" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.849444 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.849454 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b5a8a-afa9-4c03-a74b-7b53185ff829-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 17:08:08 crc kubenswrapper[4730]: I0131 17:08:08.898286 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nxfhh" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="registry-server" probeResult="failure" output=< Jan 31 17:08:08 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 17:08:08 crc kubenswrapper[4730]: > Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.168839 4730 generic.go:334] "Generic (PLEG): container finished" podID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerID="268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f" exitCode=0 Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.168888 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c82xw" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.168884 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c82xw" event={"ID":"2e3b5a8a-afa9-4c03-a74b-7b53185ff829","Type":"ContainerDied","Data":"268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f"} Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.169235 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c82xw" event={"ID":"2e3b5a8a-afa9-4c03-a74b-7b53185ff829","Type":"ContainerDied","Data":"7f8fb647e83ad45bce7e147b13fa274d5d3b93e06d9453d95fd922ade68e098e"} Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.169260 4730 scope.go:117] "RemoveContainer" containerID="268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.191106 4730 scope.go:117] "RemoveContainer" containerID="303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.204171 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c82xw"] Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.214152 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c82xw"] Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.221180 4730 scope.go:117] "RemoveContainer" containerID="cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.267242 4730 scope.go:117] "RemoveContainer" containerID="268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f" Jan 31 17:08:09 crc kubenswrapper[4730]: E0131 17:08:09.268470 4730 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f\": container with ID starting with 268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f not found: ID does not exist" containerID="268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.268499 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f"} err="failed to get container status \"268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f\": rpc error: code = NotFound desc = could not find container \"268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f\": container with ID starting with 268089dd68cb3408c51b4862b9dd71d0f7c021a9c018f384db207507dc44d95f not found: ID does not exist" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.268519 4730 scope.go:117] "RemoveContainer" containerID="303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d" Jan 31 17:08:09 crc kubenswrapper[4730]: E0131 17:08:09.272246 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d\": container with ID starting with 303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d not found: ID does not exist" containerID="303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.272270 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d"} err="failed to get container status \"303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d\": rpc error: code = NotFound desc = could not find container \"303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d\": container with ID starting with 303a4de83addfad9d1d461a7bc767a01683acb0e6168178b9237167ba5ccd93d not found: ID does not exist" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.272288 4730 scope.go:117] "RemoveContainer" containerID="cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3" Jan 31 17:08:09 crc kubenswrapper[4730]: E0131 17:08:09.272623 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3\": container with ID starting with cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3 not found: ID does not exist" containerID="cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.272645 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3"} err="failed to get container status \"cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3\": rpc error: code = NotFound desc = could not find container \"cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3\": container with ID starting with cac6fee1dfb9a9a5cc7601249538e7d8b9f32957887fbbb0154cdbc9a34e6cf3 not found: ID does not exist" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.463477 4730 scope.go:117] "RemoveContainer" 
containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:08:09 crc kubenswrapper[4730]: I0131 17:08:09.463502 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:08:09 crc kubenswrapper[4730]: E0131 17:08:09.464629 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:08:10 crc kubenswrapper[4730]: I0131 17:08:10.473724 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" path="/var/lib/kubelet/pods/2e3b5a8a-afa9-4c03-a74b-7b53185ff829/volumes" Jan 31 17:08:13 crc kubenswrapper[4730]: I0131 17:08:13.464973 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:08:13 crc kubenswrapper[4730]: I0131 17:08:13.465574 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:08:13 crc kubenswrapper[4730]: I0131 17:08:13.465603 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:08:13 crc kubenswrapper[4730]: I0131 17:08:13.465663 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:08:13 crc kubenswrapper[4730]: I0131 17:08:13.465672 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:08:13 crc kubenswrapper[4730]: E0131 17:08:13.466074 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:08:17 crc kubenswrapper[4730]: I0131 17:08:17.884053 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:08:17 crc kubenswrapper[4730]: I0131 17:08:17.937501 
4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:08:18 crc kubenswrapper[4730]: I0131 17:08:18.125507 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxfhh"] Jan 31 17:08:19 crc kubenswrapper[4730]: I0131 17:08:19.447929 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nxfhh" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="registry-server" containerID="cri-o://fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b" gracePeriod=2 Jan 31 17:08:19 crc kubenswrapper[4730]: I0131 17:08:19.946778 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.137067 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-catalog-content\") pod \"61f4062a-9d13-4d85-bea4-1eebfc32260e\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.137201 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp76h\" (UniqueName: \"kubernetes.io/projected/61f4062a-9d13-4d85-bea4-1eebfc32260e-kube-api-access-qp76h\") pod \"61f4062a-9d13-4d85-bea4-1eebfc32260e\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.137426 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-utilities\") pod \"61f4062a-9d13-4d85-bea4-1eebfc32260e\" (UID: \"61f4062a-9d13-4d85-bea4-1eebfc32260e\") " Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.138309 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-utilities" (OuterVolumeSpecName: "utilities") pod "61f4062a-9d13-4d85-bea4-1eebfc32260e" (UID: "61f4062a-9d13-4d85-bea4-1eebfc32260e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.138845 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.145837 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f4062a-9d13-4d85-bea4-1eebfc32260e-kube-api-access-qp76h" (OuterVolumeSpecName: "kube-api-access-qp76h") pod "61f4062a-9d13-4d85-bea4-1eebfc32260e" (UID: "61f4062a-9d13-4d85-bea4-1eebfc32260e"). InnerVolumeSpecName "kube-api-access-qp76h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.240264 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp76h\" (UniqueName: \"kubernetes.io/projected/61f4062a-9d13-4d85-bea4-1eebfc32260e-kube-api-access-qp76h\") on node \"crc\" DevicePath \"\"" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.282006 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61f4062a-9d13-4d85-bea4-1eebfc32260e" (UID: "61f4062a-9d13-4d85-bea4-1eebfc32260e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.341212 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4062a-9d13-4d85-bea4-1eebfc32260e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.456524 4730 generic.go:334] "Generic (PLEG): container finished" podID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerID="fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b" exitCode=0 Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.456569 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxfhh" event={"ID":"61f4062a-9d13-4d85-bea4-1eebfc32260e","Type":"ContainerDied","Data":"fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b"} Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.456593 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nxfhh" event={"ID":"61f4062a-9d13-4d85-bea4-1eebfc32260e","Type":"ContainerDied","Data":"37c69755c64756224f7a301b6e2fe2a629161affe026e17809c22ce845a19462"} Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.456609 4730 scope.go:117] "RemoveContainer" containerID="fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.456639 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.479765 4730 scope.go:117] "RemoveContainer" containerID="cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.500253 4730 scope.go:117] "RemoveContainer" containerID="2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.544101 4730 scope.go:117] "RemoveContainer" containerID="fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b" Jan 31 17:08:20 crc kubenswrapper[4730]: E0131 17:08:20.544603 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b\": container with ID starting with fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b not found: ID does not exist" containerID="fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.544640 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b"} err="failed to get container status \"fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b\": rpc error: code = NotFound desc = could not find container \"fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b\": container with ID starting with fb49f6438ccfcc5d499a08d2e1e4d225725cdd2fdd7754c66daf850892fbea2b not found: ID does not exist" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.544775 4730 scope.go:117] "RemoveContainer" containerID="cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659" Jan 31 17:08:20 crc kubenswrapper[4730]: E0131 17:08:20.545117 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659\": container with ID starting with cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659 not found: ID does not exist" containerID="cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.545161 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659"} err="failed to get container status \"cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659\": rpc error: code = NotFound desc = could not find container \"cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659\": container with ID starting with cee82ada92227b45aed9fa0d08f055961bda350c38c00bc4f8f40c063c758659 not found: ID does not exist" Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.545178 4730 scope.go:117] "RemoveContainer" containerID="2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e" Jan 31 17:08:20 crc kubenswrapper[4730]: E0131 17:08:20.545469 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e\": container with ID starting with 2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e not found: ID does not exist" containerID="2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e" 
Jan 31 17:08:20 crc kubenswrapper[4730]: I0131 17:08:20.545512 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e"} err="failed to get container status \"2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e\": rpc error: code = NotFound desc = could not find container \"2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e\": container with ID starting with 2ff3e85cab0a8d2991431f2c1b9aa56c9b6e004dda6d9a463005ddb05b73ed9e not found: ID does not exist" Jan 31 17:08:23 crc kubenswrapper[4730]: I0131 17:08:23.465620 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:08:23 crc kubenswrapper[4730]: I0131 17:08:23.466408 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:08:23 crc kubenswrapper[4730]: E0131 17:08:23.466992 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:08:26 crc kubenswrapper[4730]: I0131 17:08:26.469251 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:08:26 crc kubenswrapper[4730]: I0131 17:08:26.469339 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:08:26 crc kubenswrapper[4730]: I0131 17:08:26.469370 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:08:26 crc kubenswrapper[4730]: I0131 17:08:26.469429 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:08:26 crc kubenswrapper[4730]: I0131 17:08:26.469441 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:08:26 crc kubenswrapper[4730]: E0131 17:08:26.470029 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:08:38 crc kubenswrapper[4730]: I0131 17:08:38.464444 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:08:38 crc kubenswrapper[4730]: I0131 17:08:38.465265 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:08:38 crc kubenswrapper[4730]: E0131 17:08:38.465504 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:08:39 crc kubenswrapper[4730]: I0131 17:08:39.465787 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:08:39 crc kubenswrapper[4730]: I0131 17:08:39.466320 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:08:39 crc kubenswrapper[4730]: I0131 17:08:39.466364 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:08:39 crc kubenswrapper[4730]: I0131 17:08:39.466460 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:08:39 crc kubenswrapper[4730]: I0131 17:08:39.466473 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:08:39 crc kubenswrapper[4730]: E0131 17:08:39.749240 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:08:40 crc kubenswrapper[4730]: I0131 17:08:40.649284 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1"} Jan 31 17:08:40 crc kubenswrapper[4730]: I0131 17:08:40.650666 4730 scope.go:117] "RemoveContainer" 
containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:08:40 crc kubenswrapper[4730]: I0131 17:08:40.650826 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:08:40 crc kubenswrapper[4730]: I0131 17:08:40.650882 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:08:40 crc kubenswrapper[4730]: I0131 17:08:40.651014 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:08:40 crc kubenswrapper[4730]: E0131 17:08:40.651628 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:08:50 crc kubenswrapper[4730]: I0131 17:08:50.463854 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:08:50 crc kubenswrapper[4730]: I0131 17:08:50.464432 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:08:50 crc kubenswrapper[4730]: E0131 17:08:50.464662 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:08:50 crc kubenswrapper[4730]: I0131 17:08:50.533450 4730 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod61f4062a-9d13-4d85-bea4-1eebfc32260e"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod61f4062a-9d13-4d85-bea4-1eebfc32260e] : Timed out while waiting for systemd to remove kubepods-burstable-pod61f4062a_9d13_4d85_bea4_1eebfc32260e.slice" Jan 31 17:08:50 crc kubenswrapper[4730]: E0131 17:08:50.533507 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod61f4062a-9d13-4d85-bea4-1eebfc32260e] : unable to destroy cgroup paths for cgroup [kubepods burstable pod61f4062a-9d13-4d85-bea4-1eebfc32260e] : Timed out while waiting for systemd to remove kubepods-burstable-pod61f4062a_9d13_4d85_bea4_1eebfc32260e.slice" 
pod="openshift-marketplace/redhat-operators-nxfhh" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" Jan 31 17:08:50 crc kubenswrapper[4730]: I0131 17:08:50.730355 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nxfhh" Jan 31 17:08:50 crc kubenswrapper[4730]: I0131 17:08:50.790047 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nxfhh"] Jan 31 17:08:50 crc kubenswrapper[4730]: I0131 17:08:50.816968 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nxfhh"] Jan 31 17:08:52 crc kubenswrapper[4730]: I0131 17:08:52.481556 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" path="/var/lib/kubelet/pods/61f4062a-9d13-4d85-bea4-1eebfc32260e/volumes" Jan 31 17:08:55 crc kubenswrapper[4730]: I0131 17:08:55.464397 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:08:55 crc kubenswrapper[4730]: I0131 17:08:55.465385 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:08:55 crc kubenswrapper[4730]: I0131 17:08:55.465506 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:08:55 crc kubenswrapper[4730]: I0131 17:08:55.465657 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:08:55 crc kubenswrapper[4730]: E0131 17:08:55.466219 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:09:03 crc kubenswrapper[4730]: I0131 17:09:03.465171 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:03 crc kubenswrapper[4730]: I0131 17:09:03.465586 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:09:03 crc kubenswrapper[4730]: E0131 17:09:03.667437 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:03 crc kubenswrapper[4730]: I0131 17:09:03.846935 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9"} Jan 31 17:09:03 crc kubenswrapper[4730]: I0131 17:09:03.847771 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:03 crc kubenswrapper[4730]: I0131 17:09:03.847979 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:09:03 crc kubenswrapper[4730]: E0131 17:09:03.848173 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:04 crc kubenswrapper[4730]: I0131 17:09:04.857473 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" exitCode=1 Jan 31 17:09:04 crc kubenswrapper[4730]: I0131 17:09:04.858068 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9"} Jan 31 17:09:04 crc kubenswrapper[4730]: I0131 17:09:04.858100 4730 scope.go:117] "RemoveContainer" containerID="b6dc0bb59c8b3a61f6940574bf22d1b7358231f96f66599ccbfcb364b8c6706f" Jan 31 17:09:04 crc kubenswrapper[4730]: I0131 17:09:04.858868 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:04 crc kubenswrapper[4730]: I0131 17:09:04.858879 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:09:04 crc kubenswrapper[4730]: E0131 17:09:04.859270 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:05 crc kubenswrapper[4730]: I0131 17:09:05.866931 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:05 crc kubenswrapper[4730]: I0131 17:09:05.866955 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:09:05 crc kubenswrapper[4730]: E0131 17:09:05.867285 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:06 crc kubenswrapper[4730]: I0131 17:09:06.654150 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:09:06 crc kubenswrapper[4730]: I0131 17:09:06.876116 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:06 crc kubenswrapper[4730]: I0131 17:09:06.876154 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:09:06 crc kubenswrapper[4730]: E0131 17:09:06.876594 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.465614 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.466187 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.466211 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.466270 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:09:10 crc kubenswrapper[4730]: E0131 17:09:10.690282 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.913857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752"} Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.914655 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.914710 4730 scope.go:117] "RemoveContainer" 
containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:09:10 crc kubenswrapper[4730]: I0131 17:09:10.914793 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:09:10 crc kubenswrapper[4730]: E0131 17:09:10.915087 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:09:20 crc kubenswrapper[4730]: I0131 17:09:20.464600 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:20 crc kubenswrapper[4730]: I0131 17:09:20.465264 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:09:20 crc kubenswrapper[4730]: E0131 17:09:20.465565 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:24 crc kubenswrapper[4730]: I0131 17:09:24.473372 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:09:24 crc kubenswrapper[4730]: I0131 17:09:24.475115 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:09:24 crc kubenswrapper[4730]: I0131 17:09:24.475327 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:09:24 crc kubenswrapper[4730]: E0131 17:09:24.475926 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:09:26 crc kubenswrapper[4730]: I0131 17:09:26.975337 4730 
patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:09:26 crc kubenswrapper[4730]: I0131 17:09:26.975725 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:09:32 crc kubenswrapper[4730]: I0131 17:09:32.135123 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" exitCode=1 Jan 31 17:09:32 crc kubenswrapper[4730]: I0131 17:09:32.135226 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1"} Jan 31 17:09:32 crc kubenswrapper[4730]: I0131 17:09:32.136005 4730 scope.go:117] "RemoveContainer" containerID="51b4d274059cbb8c7d0d884e8b54596d92f9b84b40588691a660c9e00b88c601" Jan 31 17:09:32 crc kubenswrapper[4730]: I0131 17:09:32.137566 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:09:32 crc kubenswrapper[4730]: I0131 17:09:32.137648 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:09:32 crc kubenswrapper[4730]: I0131 17:09:32.138657 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:09:32 crc kubenswrapper[4730]: I0131 17:09:32.138964 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:09:32 crc kubenswrapper[4730]: E0131 17:09:32.142713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:09:33 crc kubenswrapper[4730]: I0131 17:09:33.465338 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:33 crc kubenswrapper[4730]: I0131 17:09:33.466692 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:09:33 crc 
kubenswrapper[4730]: E0131 17:09:33.467296 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:35 crc kubenswrapper[4730]: I0131 17:09:35.442069 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:09:35 crc kubenswrapper[4730]: E0131 17:09:35.442299 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:09:35 crc kubenswrapper[4730]: E0131 17:09:35.442731 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:11:37.442698524 +0000 UTC m=+2484.248755470 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:09:45 crc kubenswrapper[4730]: E0131 17:09:45.874025 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:09:46 crc kubenswrapper[4730]: I0131 17:09:46.294428 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:09:47 crc kubenswrapper[4730]: I0131 17:09:47.464928 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:09:47 crc kubenswrapper[4730]: I0131 17:09:47.464956 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:09:47 crc kubenswrapper[4730]: E0131 17:09:47.465196 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:09:47 crc kubenswrapper[4730]: I0131 17:09:47.466097 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:09:47 crc kubenswrapper[4730]: I0131 17:09:47.466212 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:09:47 crc kubenswrapper[4730]: I0131 17:09:47.466354 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:09:47 crc kubenswrapper[4730]: I0131 17:09:47.466374 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:09:47 crc kubenswrapper[4730]: E0131 17:09:47.466904 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:09:56 crc kubenswrapper[4730]: I0131 17:09:56.976123 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:09:56 crc kubenswrapper[4730]: I0131 17:09:56.976863 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 31 17:09:59 crc kubenswrapper[4730]: I0131 17:09:59.465239 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:09:59 crc kubenswrapper[4730]: I0131 17:09:59.465734 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:09:59 crc kubenswrapper[4730]: I0131 17:09:59.465947 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:09:59 crc kubenswrapper[4730]: I0131 17:09:59.465963 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:09:59 crc kubenswrapper[4730]: E0131 17:09:59.466651 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:10:01 crc kubenswrapper[4730]: I0131 17:10:01.464659 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:10:01 crc kubenswrapper[4730]: I0131 17:10:01.465063 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:01 crc kubenswrapper[4730]: E0131 17:10:01.701541 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:02 crc kubenswrapper[4730]: I0131 17:10:02.485725 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"d65a5a0f60014f7873d8c3f6dcb0900e0aa25290eec92c8f2f8a6e2e12035fa0"} Jan 31 17:10:02 crc kubenswrapper[4730]: I0131 17:10:02.486445 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:10:02 crc kubenswrapper[4730]: I0131 17:10:02.487080 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:02 crc kubenswrapper[4730]: E0131 17:10:02.487748 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:03 crc kubenswrapper[4730]: I0131 17:10:03.475417 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:03 crc kubenswrapper[4730]: E0131 17:10:03.475786 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:06 crc kubenswrapper[4730]: I0131 17:10:06.661282 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:09 crc kubenswrapper[4730]: I0131 17:10:09.657714 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:10 crc kubenswrapper[4730]: I0131 17:10:10.660033 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:11 crc kubenswrapper[4730]: I0131 17:10:11.466197 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:10:11 crc kubenswrapper[4730]: I0131 17:10:11.466338 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:10:11 crc kubenswrapper[4730]: I0131 17:10:11.466530 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:10:11 crc kubenswrapper[4730]: I0131 17:10:11.466546 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:10:11 crc kubenswrapper[4730]: E0131 17:10:11.467145 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:10:12 crc kubenswrapper[4730]: I0131 17:10:12.660366 4730 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:12 crc kubenswrapper[4730]: I0131 17:10:12.660941 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:10:12 crc kubenswrapper[4730]: I0131 17:10:12.662199 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"d65a5a0f60014f7873d8c3f6dcb0900e0aa25290eec92c8f2f8a6e2e12035fa0"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:10:12 crc kubenswrapper[4730]: I0131 17:10:12.662248 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:12 crc kubenswrapper[4730]: I0131 17:10:12.662295 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://d65a5a0f60014f7873d8c3f6dcb0900e0aa25290eec92c8f2f8a6e2e12035fa0" gracePeriod=30 Jan 31 17:10:12 crc kubenswrapper[4730]: I0131 17:10:12.671478 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": EOF" Jan 31 17:10:13 crc kubenswrapper[4730]: E0131 17:10:13.143760 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:13 crc kubenswrapper[4730]: I0131 17:10:13.573643 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="d65a5a0f60014f7873d8c3f6dcb0900e0aa25290eec92c8f2f8a6e2e12035fa0" exitCode=0 Jan 31 17:10:13 crc kubenswrapper[4730]: I0131 17:10:13.573681 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"d65a5a0f60014f7873d8c3f6dcb0900e0aa25290eec92c8f2f8a6e2e12035fa0"} Jan 31 17:10:13 crc kubenswrapper[4730]: I0131 17:10:13.573706 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f"} Jan 31 17:10:13 crc kubenswrapper[4730]: I0131 17:10:13.573722 4730 scope.go:117] "RemoveContainer" containerID="acb0ab58548bb4d90e20fbf1328be1dcba2730b1fa77ca34a9857298bd8dd10d" Jan 31 17:10:13 crc kubenswrapper[4730]: I0131 17:10:13.573967 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:10:13 crc kubenswrapper[4730]: I0131 17:10:13.574381 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:13 crc kubenswrapper[4730]: E0131 
17:10:13.574567 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:14 crc kubenswrapper[4730]: I0131 17:10:14.589953 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:14 crc kubenswrapper[4730]: E0131 17:10:14.590327 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:18 crc kubenswrapper[4730]: I0131 17:10:18.660693 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:20 crc kubenswrapper[4730]: I0131 17:10:20.661715 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:21 crc kubenswrapper[4730]: I0131 17:10:21.657225 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:22 crc kubenswrapper[4730]: I0131 17:10:22.464946 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:10:22 crc kubenswrapper[4730]: I0131 17:10:22.465047 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:10:22 crc kubenswrapper[4730]: I0131 17:10:22.465154 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:10:22 crc kubenswrapper[4730]: I0131 17:10:22.465164 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:10:22 crc kubenswrapper[4730]: E0131 17:10:22.465598 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:10:24 crc kubenswrapper[4730]: I0131 17:10:24.663713 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:24 crc kubenswrapper[4730]: I0131 17:10:24.664178 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:10:24 crc kubenswrapper[4730]: I0131 17:10:24.665252 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:10:24 crc kubenswrapper[4730]: I0131 17:10:24.665291 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:24 crc kubenswrapper[4730]: I0131 17:10:24.665331 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" gracePeriod=30 Jan 31 17:10:24 crc kubenswrapper[4730]: I0131 17:10:24.668972 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:10:24 crc kubenswrapper[4730]: E0131 17:10:24.786484 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:25 crc kubenswrapper[4730]: I0131 17:10:25.654555 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": dial tcp 10.217.0.176:8080: connect: connection refused" Jan 31 17:10:25 crc kubenswrapper[4730]: I0131 17:10:25.702784 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" exitCode=0 Jan 31 17:10:25 crc kubenswrapper[4730]: I0131 17:10:25.702849 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f"} Jan 31 17:10:25 crc kubenswrapper[4730]: I0131 17:10:25.702915 4730 scope.go:117] 
"RemoveContainer" containerID="d65a5a0f60014f7873d8c3f6dcb0900e0aa25290eec92c8f2f8a6e2e12035fa0" Jan 31 17:10:25 crc kubenswrapper[4730]: I0131 17:10:25.704087 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:10:25 crc kubenswrapper[4730]: I0131 17:10:25.704145 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:25 crc kubenswrapper[4730]: E0131 17:10:25.704587 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:26 crc kubenswrapper[4730]: I0131 17:10:26.974477 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:10:26 crc kubenswrapper[4730]: I0131 17:10:26.974561 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:10:26 crc kubenswrapper[4730]: I0131 17:10:26.974628 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 17:10:26 crc kubenswrapper[4730]: I0131 17:10:26.975674 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 17:10:26 crc kubenswrapper[4730]: I0131 17:10:26.975785 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" gracePeriod=600 Jan 31 17:10:27 crc kubenswrapper[4730]: E0131 17:10:27.107542 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:10:27 crc kubenswrapper[4730]: I0131 17:10:27.731002 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" 
containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" exitCode=0 Jan 31 17:10:27 crc kubenswrapper[4730]: I0131 17:10:27.731185 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf"} Jan 31 17:10:27 crc kubenswrapper[4730]: I0131 17:10:27.731352 4730 scope.go:117] "RemoveContainer" containerID="f8668f98817acfc5fd3cfd4762ca185e124bba2a71d4c129e398e40d29fa8b09" Jan 31 17:10:27 crc kubenswrapper[4730]: I0131 17:10:27.732064 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:10:27 crc kubenswrapper[4730]: E0131 17:10:27.732408 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.788564 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" exitCode=1 Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.788690 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752"} Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.789638 4730 scope.go:117] "RemoveContainer" containerID="9c1fadf07df3699388c13f80612fec058abd2daaac4a75790523ae9e8171fbe6" Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.790917 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.791033 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.791076 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.791190 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:10:32 crc kubenswrapper[4730]: I0131 17:10:32.791203 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:10:32 crc kubenswrapper[4730]: E0131 17:10:32.791967 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:10:36 crc kubenswrapper[4730]: I0131 17:10:36.468944 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:10:36 crc kubenswrapper[4730]: I0131 17:10:36.472561 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:36 crc kubenswrapper[4730]: E0131 17:10:36.473083 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:40 crc kubenswrapper[4730]: I0131 17:10:40.464561 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:10:40 crc kubenswrapper[4730]: E0131 17:10:40.465528 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:10:44 crc kubenswrapper[4730]: I0131 17:10:44.482174 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:10:44 crc kubenswrapper[4730]: I0131 17:10:44.483038 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:10:44 crc kubenswrapper[4730]: I0131 17:10:44.483093 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:10:44 crc kubenswrapper[4730]: I0131 17:10:44.483488 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:10:44 crc kubenswrapper[4730]: I0131 17:10:44.483512 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:10:44 crc kubenswrapper[4730]: E0131 17:10:44.484554 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for 
\"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:10:47 crc kubenswrapper[4730]: I0131 17:10:47.464985 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:10:47 crc kubenswrapper[4730]: I0131 17:10:47.465442 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:10:47 crc kubenswrapper[4730]: E0131 17:10:47.466069 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:10:55 crc kubenswrapper[4730]: I0131 17:10:55.463759 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:10:55 crc kubenswrapper[4730]: I0131 17:10:55.464270 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:10:55 crc kubenswrapper[4730]: I0131 17:10:55.464291 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:10:55 crc kubenswrapper[4730]: I0131 17:10:55.464310 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:10:55 crc kubenswrapper[4730]: I0131 17:10:55.464336 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:10:55 crc kubenswrapper[4730]: I0131 17:10:55.464343 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:10:55 crc kubenswrapper[4730]: E0131 17:10:55.464578 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:10:55 crc kubenswrapper[4730]: E0131 17:10:55.464684 4730 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:11:00 crc kubenswrapper[4730]: I0131 17:11:00.464480 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:11:00 crc kubenswrapper[4730]: I0131 17:11:00.464793 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:11:00 crc kubenswrapper[4730]: E0131 17:11:00.465339 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:11:09 crc kubenswrapper[4730]: I0131 17:11:09.465492 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:11:09 crc kubenswrapper[4730]: I0131 17:11:09.466153 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:11:09 crc kubenswrapper[4730]: I0131 17:11:09.466200 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:11:09 crc kubenswrapper[4730]: I0131 17:11:09.466300 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:11:09 crc kubenswrapper[4730]: I0131 17:11:09.466313 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:11:09 crc kubenswrapper[4730]: E0131 17:11:09.467008 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:11:10 crc kubenswrapper[4730]: I0131 17:11:10.464334 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:11:10 crc kubenswrapper[4730]: E0131 17:11:10.465014 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:11:11 crc kubenswrapper[4730]: I0131 17:11:11.464957 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:11:11 crc kubenswrapper[4730]: I0131 17:11:11.465002 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:11:11 crc kubenswrapper[4730]: E0131 17:11:11.465483 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:11:21 crc kubenswrapper[4730]: I0131 17:11:21.465083 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:11:21 crc kubenswrapper[4730]: I0131 17:11:21.465704 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:11:21 crc kubenswrapper[4730]: I0131 17:11:21.465730 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:11:21 crc kubenswrapper[4730]: I0131 17:11:21.465819 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:11:21 crc kubenswrapper[4730]: I0131 17:11:21.465828 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:11:21 crc kubenswrapper[4730]: E0131 17:11:21.466168 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:11:23 crc kubenswrapper[4730]: I0131 17:11:23.467008 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:11:23 crc kubenswrapper[4730]: E0131 17:11:23.467758 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:11:25 crc kubenswrapper[4730]: I0131 17:11:25.464489 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:11:25 crc kubenswrapper[4730]: I0131 17:11:25.464732 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:11:25 crc kubenswrapper[4730]: E0131 17:11:25.464977 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:11:33 crc kubenswrapper[4730]: I0131 17:11:33.465608 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:11:33 crc kubenswrapper[4730]: I0131 17:11:33.466283 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:11:33 crc kubenswrapper[4730]: I0131 17:11:33.466329 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:11:33 crc kubenswrapper[4730]: I0131 17:11:33.466423 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:11:33 crc kubenswrapper[4730]: I0131 17:11:33.466436 4730 scope.go:117] "RemoveContainer" 
containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:11:33 crc kubenswrapper[4730]: E0131 17:11:33.467201 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:11:37 crc kubenswrapper[4730]: I0131 17:11:37.463870 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:11:37 crc kubenswrapper[4730]: E0131 17:11:37.464830 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:11:37 crc kubenswrapper[4730]: I0131 17:11:37.485825 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:11:37 crc kubenswrapper[4730]: E0131 17:11:37.487574 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:11:37 crc kubenswrapper[4730]: E0131 17:11:37.487640 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:13:39.487622446 +0000 UTC m=+2606.293679372 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:11:40 crc kubenswrapper[4730]: I0131 17:11:40.464646 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:11:40 crc kubenswrapper[4730]: I0131 17:11:40.465921 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:11:40 crc kubenswrapper[4730]: E0131 17:11:40.466352 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:11:47 crc kubenswrapper[4730]: I0131 17:11:47.466164 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:11:47 crc kubenswrapper[4730]: I0131 17:11:47.467346 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:11:47 crc kubenswrapper[4730]: I0131 17:11:47.467400 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:11:47 crc kubenswrapper[4730]: I0131 17:11:47.467523 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:11:47 crc kubenswrapper[4730]: I0131 17:11:47.467542 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:11:47 crc kubenswrapper[4730]: E0131 17:11:47.468562 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:11:49 crc kubenswrapper[4730]: E0131 17:11:49.296198 4730 pod_workers.go:1301] "Error syncing pod, skipping" 
err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:11:49 crc kubenswrapper[4730]: I0131 17:11:49.593140 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:11:50 crc kubenswrapper[4730]: I0131 17:11:50.464436 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:11:50 crc kubenswrapper[4730]: E0131 17:11:50.464728 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:11:54 crc kubenswrapper[4730]: I0131 17:11:54.471394 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:11:54 crc kubenswrapper[4730]: I0131 17:11:54.472031 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:11:54 crc kubenswrapper[4730]: E0131 17:11:54.472461 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:12:00 crc kubenswrapper[4730]: I0131 17:12:00.465212 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:12:00 crc kubenswrapper[4730]: I0131 17:12:00.465813 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:12:00 crc kubenswrapper[4730]: I0131 17:12:00.465837 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:12:00 crc kubenswrapper[4730]: I0131 17:12:00.465886 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:12:00 crc kubenswrapper[4730]: I0131 17:12:00.465893 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:12:00 crc kubenswrapper[4730]: E0131 17:12:00.466225 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for 
\"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:12:05 crc kubenswrapper[4730]: I0131 17:12:05.465059 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:12:05 crc kubenswrapper[4730]: E0131 17:12:05.466361 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:12:09 crc kubenswrapper[4730]: I0131 17:12:09.464387 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:12:09 crc kubenswrapper[4730]: I0131 17:12:09.464918 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:12:09 crc kubenswrapper[4730]: E0131 17:12:09.465267 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:12:14 crc kubenswrapper[4730]: I0131 17:12:14.476066 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:12:14 crc kubenswrapper[4730]: I0131 17:12:14.476701 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:12:14 crc kubenswrapper[4730]: I0131 17:12:14.476745 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:12:14 crc kubenswrapper[4730]: I0131 17:12:14.476863 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:12:14 crc kubenswrapper[4730]: I0131 17:12:14.476881 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:12:14 crc kubenswrapper[4730]: E0131 17:12:14.477554 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:12:18 crc kubenswrapper[4730]: I0131 17:12:18.465943 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:12:18 crc kubenswrapper[4730]: E0131 17:12:18.467030 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:12:21 crc kubenswrapper[4730]: I0131 17:12:21.464832 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:12:21 crc kubenswrapper[4730]: I0131 17:12:21.465220 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:12:21 crc kubenswrapper[4730]: E0131 17:12:21.465670 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:12:28 crc kubenswrapper[4730]: I0131 17:12:28.468069 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:12:28 crc kubenswrapper[4730]: I0131 17:12:28.475372 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:12:28 crc kubenswrapper[4730]: I0131 17:12:28.475476 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:12:28 crc kubenswrapper[4730]: I0131 17:12:28.475585 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:12:28 crc kubenswrapper[4730]: I0131 17:12:28.475643 4730 scope.go:117] "RemoveContainer" 
containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:12:28 crc kubenswrapper[4730]: I0131 17:12:28.943986 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d"} Jan 31 17:12:28 crc kubenswrapper[4730]: I0131 17:12:28.944273 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54"} Jan 31 17:12:29 crc kubenswrapper[4730]: E0131 17:12:29.034394 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.465347 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:12:29 crc kubenswrapper[4730]: E0131 17:12:29.466421 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.965494 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" exitCode=1 Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.965541 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" exitCode=1 Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.965553 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" exitCode=1 Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.965531 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d"} Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.965597 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54"} Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.965616 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2"} Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.965638 4730 scope.go:117] "RemoveContainer" containerID="8d2acd41b7c8dd7210961aa0b3eca17d87fc5a33122812f71726111c20b0e48b" Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.966582 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.966737 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.966779 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.966877 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:12:29 crc kubenswrapper[4730]: I0131 17:12:29.966891 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:12:29 crc kubenswrapper[4730]: E0131 17:12:29.968837 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:12:30 crc kubenswrapper[4730]: I0131 17:12:30.030578 4730 scope.go:117] "RemoveContainer" containerID="53a93ac189c43db9a68c332ace6268f544cb3a7f6cc99b161c99c14152d22958" Jan 31 17:12:30 crc kubenswrapper[4730]: I0131 17:12:30.092599 4730 scope.go:117] "RemoveContainer" containerID="f544e9fe5feccb6dcbb13a049689284db5292a304eb155d61030daa0a8c30bd5" Jan 31 17:12:30 crc kubenswrapper[4730]: I0131 17:12:30.986283 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:12:30 crc kubenswrapper[4730]: I0131 17:12:30.988111 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:12:30 crc kubenswrapper[4730]: I0131 17:12:30.988192 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:12:30 crc kubenswrapper[4730]: I0131 17:12:30.988307 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" 
Jan 31 17:12:30 crc kubenswrapper[4730]: I0131 17:12:30.988323 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:12:30 crc kubenswrapper[4730]: E0131 17:12:30.989337 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:12:33 crc kubenswrapper[4730]: I0131 17:12:33.467220 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:12:33 crc kubenswrapper[4730]: I0131 17:12:33.467558 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:12:33 crc kubenswrapper[4730]: E0131 17:12:33.468204 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:12:43 crc kubenswrapper[4730]: I0131 17:12:43.464543 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:12:43 crc kubenswrapper[4730]: E0131 17:12:43.465251 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:12:45 crc kubenswrapper[4730]: I0131 17:12:45.464836 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:12:45 crc kubenswrapper[4730]: I0131 17:12:45.464862 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:12:45 crc kubenswrapper[4730]: E0131 17:12:45.465054 4730 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:12:46 crc kubenswrapper[4730]: I0131 17:12:46.464327 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:12:46 crc kubenswrapper[4730]: I0131 17:12:46.465038 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:12:46 crc kubenswrapper[4730]: I0131 17:12:46.465463 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:12:46 crc kubenswrapper[4730]: I0131 17:12:46.465614 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:12:46 crc kubenswrapper[4730]: I0131 17:12:46.465696 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:12:46 crc kubenswrapper[4730]: E0131 17:12:46.466316 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:12:55 crc kubenswrapper[4730]: I0131 17:12:55.464368 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:12:55 crc kubenswrapper[4730]: E0131 17:12:55.465360 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:12:59 crc kubenswrapper[4730]: I0131 17:12:59.465597 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" 
Jan 31 17:12:59 crc kubenswrapper[4730]: I0131 17:12:59.467169 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:12:59 crc kubenswrapper[4730]: I0131 17:12:59.467253 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:12:59 crc kubenswrapper[4730]: I0131 17:12:59.467363 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:12:59 crc kubenswrapper[4730]: I0131 17:12:59.467377 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:12:59 crc kubenswrapper[4730]: E0131 17:12:59.468688 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:13:00 crc kubenswrapper[4730]: I0131 17:13:00.464312 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:13:00 crc kubenswrapper[4730]: I0131 17:13:00.464873 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:13:00 crc kubenswrapper[4730]: E0131 17:13:00.465237 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:13:09 crc kubenswrapper[4730]: I0131 17:13:09.465892 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:13:09 crc kubenswrapper[4730]: E0131 17:13:09.466940 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:13:13 crc kubenswrapper[4730]: I0131 17:13:13.465353 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:13:13 crc kubenswrapper[4730]: I0131 17:13:13.466266 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:13:13 crc kubenswrapper[4730]: I0131 17:13:13.466333 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:13:13 crc kubenswrapper[4730]: I0131 17:13:13.466459 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:13:13 crc kubenswrapper[4730]: I0131 17:13:13.466479 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:13:13 crc kubenswrapper[4730]: E0131 17:13:13.467330 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:13:15 crc kubenswrapper[4730]: I0131 17:13:15.465532 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:13:15 crc kubenswrapper[4730]: I0131 17:13:15.465847 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:13:15 crc kubenswrapper[4730]: E0131 17:13:15.466454 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:13:24 crc kubenswrapper[4730]: I0131 17:13:24.466208 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:13:24 crc kubenswrapper[4730]: E0131 17:13:24.466980 4730 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:13:27 crc kubenswrapper[4730]: I0131 17:13:27.465004 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:13:27 crc kubenswrapper[4730]: I0131 17:13:27.465284 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:13:27 crc kubenswrapper[4730]: I0131 17:13:27.465323 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:13:27 crc kubenswrapper[4730]: I0131 17:13:27.465368 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:13:27 crc kubenswrapper[4730]: I0131 17:13:27.465374 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:13:27 crc kubenswrapper[4730]: E0131 17:13:27.465667 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:13:29 crc kubenswrapper[4730]: I0131 17:13:29.464428 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:13:29 crc kubenswrapper[4730]: I0131 17:13:29.464757 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:13:29 crc kubenswrapper[4730]: E0131 17:13:29.465084 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:13:39 crc 
kubenswrapper[4730]: I0131 17:13:39.464207 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:13:39 crc kubenswrapper[4730]: E0131 17:13:39.465231 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:13:39 crc kubenswrapper[4730]: I0131 17:13:39.533204 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:13:39 crc kubenswrapper[4730]: E0131 17:13:39.533356 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:13:39 crc kubenswrapper[4730]: E0131 17:13:39.533434 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:15:41.533417615 +0000 UTC m=+2728.339474521 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:13:41 crc kubenswrapper[4730]: I0131 17:13:41.465625 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:13:41 crc kubenswrapper[4730]: I0131 17:13:41.466035 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:13:41 crc kubenswrapper[4730]: I0131 17:13:41.466062 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:13:41 crc kubenswrapper[4730]: I0131 17:13:41.466104 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:13:41 crc kubenswrapper[4730]: I0131 17:13:41.466139 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:13:41 crc kubenswrapper[4730]: I0131 17:13:41.466199 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:13:41 crc kubenswrapper[4730]: I0131 17:13:41.466209 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:13:41 crc kubenswrapper[4730]: E0131 17:13:41.466559 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:13:41 crc kubenswrapper[4730]: E0131 17:13:41.466888 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:13:52 crc kubenswrapper[4730]: E0131 17:13:52.594598 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:13:52 crc kubenswrapper[4730]: I0131 17:13:52.810218 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:13:54 crc kubenswrapper[4730]: I0131 17:13:54.474694 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:13:54 crc kubenswrapper[4730]: E0131 17:13:54.475745 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:13:55 crc kubenswrapper[4730]: I0131 17:13:55.464484 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:13:55 crc kubenswrapper[4730]: I0131 17:13:55.464528 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:13:55 crc kubenswrapper[4730]: I0131 17:13:55.464639 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:13:55 crc kubenswrapper[4730]: I0131 17:13:55.464700 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:13:55 crc kubenswrapper[4730]: I0131 17:13:55.464725 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:13:55 crc kubenswrapper[4730]: I0131 17:13:55.464772 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:13:55 crc kubenswrapper[4730]: I0131 17:13:55.464778 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:13:55 crc kubenswrapper[4730]: E0131 17:13:55.465132 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:13:55 crc kubenswrapper[4730]: E0131 17:13:55.465291 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.463782 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.464280 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.464703 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.464907 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.465001 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.465968 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.465980 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:14:06 crc kubenswrapper[4730]: E0131 17:14:06.467897 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:14:06 crc kubenswrapper[4730]: E0131 17:14:06.646265 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.958332 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" 
event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe"} Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.959113 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:06 crc kubenswrapper[4730]: I0131 17:14:06.959254 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:14:06 crc kubenswrapper[4730]: E0131 17:14:06.959373 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:07 crc kubenswrapper[4730]: I0131 17:14:07.465245 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:14:07 crc kubenswrapper[4730]: E0131 17:14:07.466382 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:14:07 crc kubenswrapper[4730]: I0131 17:14:07.972899 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" exitCode=1 Jan 31 17:14:07 crc kubenswrapper[4730]: I0131 17:14:07.972972 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe"} Jan 31 17:14:07 crc kubenswrapper[4730]: I0131 17:14:07.973061 4730 scope.go:117] "RemoveContainer" containerID="dff68af69ac15c912714ec388be1edff8d0f6c92de2c9f54eaa0f7b6d4cfccf9" Jan 31 17:14:07 crc kubenswrapper[4730]: I0131 17:14:07.973923 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:07 crc kubenswrapper[4730]: I0131 17:14:07.973959 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:14:07 crc kubenswrapper[4730]: E0131 17:14:07.974396 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:08 crc kubenswrapper[4730]: I0131 17:14:08.988327 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:08 crc 
kubenswrapper[4730]: I0131 17:14:08.988359 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:14:08 crc kubenswrapper[4730]: E0131 17:14:08.988776 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:09 crc kubenswrapper[4730]: I0131 17:14:09.653500 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:14:10 crc kubenswrapper[4730]: I0131 17:14:10.019531 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:10 crc kubenswrapper[4730]: I0131 17:14:10.019569 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:14:10 crc kubenswrapper[4730]: E0131 17:14:10.020136 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:18 crc kubenswrapper[4730]: I0131 17:14:18.465071 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:14:18 crc kubenswrapper[4730]: I0131 17:14:18.465709 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:14:18 crc kubenswrapper[4730]: I0131 17:14:18.465756 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:14:18 crc kubenswrapper[4730]: I0131 17:14:18.465876 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:14:18 crc kubenswrapper[4730]: I0131 17:14:18.465890 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:14:18 crc kubenswrapper[4730]: E0131 17:14:18.466495 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:14:21 crc kubenswrapper[4730]: I0131 17:14:21.464571 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:21 crc kubenswrapper[4730]: I0131 17:14:21.464871 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:14:21 crc kubenswrapper[4730]: E0131 17:14:21.465100 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:22 crc kubenswrapper[4730]: I0131 17:14:22.464278 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:14:22 crc kubenswrapper[4730]: E0131 17:14:22.464719 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:14:30 crc kubenswrapper[4730]: I0131 17:14:30.464995 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:14:30 crc kubenswrapper[4730]: I0131 17:14:30.465254 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:14:30 crc kubenswrapper[4730]: I0131 17:14:30.465276 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:14:30 crc kubenswrapper[4730]: I0131 17:14:30.465320 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:14:30 crc kubenswrapper[4730]: I0131 17:14:30.465325 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:14:30 crc kubenswrapper[4730]: E0131 17:14:30.465636 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:14:36 crc kubenswrapper[4730]: I0131 17:14:36.464395 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:14:36 crc kubenswrapper[4730]: E0131 17:14:36.466275 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:14:36 crc kubenswrapper[4730]: I0131 17:14:36.466582 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:36 crc kubenswrapper[4730]: I0131 17:14:36.466612 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:14:36 crc kubenswrapper[4730]: E0131 17:14:36.467051 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:45 crc kubenswrapper[4730]: I0131 17:14:45.464553 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:14:45 crc kubenswrapper[4730]: I0131 17:14:45.466065 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:14:45 crc kubenswrapper[4730]: I0131 17:14:45.466146 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:14:45 crc kubenswrapper[4730]: I0131 17:14:45.466245 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:14:45 crc kubenswrapper[4730]: I0131 17:14:45.466300 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:14:45 crc kubenswrapper[4730]: E0131 17:14:45.690703 4730 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:14:46 crc kubenswrapper[4730]: I0131 17:14:46.340949 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457"} Jan 31 17:14:46 crc kubenswrapper[4730]: I0131 17:14:46.342169 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:14:46 crc kubenswrapper[4730]: I0131 17:14:46.342249 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:14:46 crc kubenswrapper[4730]: I0131 17:14:46.342279 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:14:46 crc kubenswrapper[4730]: I0131 17:14:46.342355 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:14:46 crc kubenswrapper[4730]: E0131 17:14:46.342815 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:14:47 crc kubenswrapper[4730]: I0131 17:14:47.464405 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:14:47 crc kubenswrapper[4730]: I0131 17:14:47.464623 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:14:47 crc kubenswrapper[4730]: E0131 17:14:47.464901 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:14:48 crc kubenswrapper[4730]: I0131 17:14:48.466131 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:14:48 crc kubenswrapper[4730]: E0131 17:14:48.466383 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:14:59 crc kubenswrapper[4730]: I0131 17:14:59.466579 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:14:59 crc kubenswrapper[4730]: I0131 17:14:59.467445 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:14:59 crc kubenswrapper[4730]: I0131 17:14:59.467468 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:14:59 crc kubenswrapper[4730]: I0131 17:14:59.467529 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:14:59 crc kubenswrapper[4730]: E0131 17:14:59.469600 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.167258 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg"] Jan 31 17:15:00 crc kubenswrapper[4730]: E0131 17:15:00.167702 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="registry-server" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.167723 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="registry-server" Jan 31 17:15:00 crc kubenswrapper[4730]: E0131 17:15:00.167735 4730 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="registry-server" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.167743 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="registry-server" Jan 31 17:15:00 crc kubenswrapper[4730]: E0131 17:15:00.167762 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="extract-content" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.167771 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="extract-content" Jan 31 17:15:00 crc kubenswrapper[4730]: E0131 17:15:00.167783 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="extract-content" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.167792 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="extract-content" Jan 31 17:15:00 crc kubenswrapper[4730]: E0131 17:15:00.167832 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="extract-utilities" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.167841 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="extract-utilities" Jan 31 17:15:00 crc kubenswrapper[4730]: E0131 17:15:00.167860 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="extract-utilities" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.167868 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="extract-utilities" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.168095 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f4062a-9d13-4d85-bea4-1eebfc32260e" containerName="registry-server" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.168126 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3b5a8a-afa9-4c03-a74b-7b53185ff829" containerName="registry-server" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.168945 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.176852 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.177230 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.186461 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg"] Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.255098 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2ae2d68-678a-4a9e-8337-5ccc24d274df-config-volume\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.255177 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2ae2d68-678a-4a9e-8337-5ccc24d274df-secret-volume\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.255225 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5j8\" (UniqueName: \"kubernetes.io/projected/f2ae2d68-678a-4a9e-8337-5ccc24d274df-kube-api-access-cz5j8\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.357458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2ae2d68-678a-4a9e-8337-5ccc24d274df-config-volume\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.357597 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2ae2d68-678a-4a9e-8337-5ccc24d274df-secret-volume\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.357667 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz5j8\" (UniqueName: \"kubernetes.io/projected/f2ae2d68-678a-4a9e-8337-5ccc24d274df-kube-api-access-cz5j8\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.360225 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2ae2d68-678a-4a9e-8337-5ccc24d274df-config-volume\") pod 
\"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.364274 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2ae2d68-678a-4a9e-8337-5ccc24d274df-secret-volume\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.382708 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz5j8\" (UniqueName: \"kubernetes.io/projected/f2ae2d68-678a-4a9e-8337-5ccc24d274df-kube-api-access-cz5j8\") pod \"collect-profiles-29497995-zrpmg\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:00 crc kubenswrapper[4730]: I0131 17:15:00.545867 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:01 crc kubenswrapper[4730]: I0131 17:15:01.050300 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg"] Jan 31 17:15:01 crc kubenswrapper[4730]: I0131 17:15:01.486883 4730 generic.go:334] "Generic (PLEG): container finished" podID="f2ae2d68-678a-4a9e-8337-5ccc24d274df" containerID="d388ed1557dcc74c384cda910de830ebdf3c6ec5abef271c7f123a58bf076b22" exitCode=0 Jan 31 17:15:01 crc kubenswrapper[4730]: I0131 17:15:01.486925 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" event={"ID":"f2ae2d68-678a-4a9e-8337-5ccc24d274df","Type":"ContainerDied","Data":"d388ed1557dcc74c384cda910de830ebdf3c6ec5abef271c7f123a58bf076b22"} Jan 31 17:15:01 crc kubenswrapper[4730]: I0131 17:15:01.486948 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" event={"ID":"f2ae2d68-678a-4a9e-8337-5ccc24d274df","Type":"ContainerStarted","Data":"e5728573bef31ad8686a37639561b0eac0c45476ee82a0451326618433ef4fc9"} Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.463981 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.465065 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.465263 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:15:02 crc kubenswrapper[4730]: E0131 17:15:02.465419 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:02 crc 
kubenswrapper[4730]: E0131 17:15:02.465639 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.843915 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.903482 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz5j8\" (UniqueName: \"kubernetes.io/projected/f2ae2d68-678a-4a9e-8337-5ccc24d274df-kube-api-access-cz5j8\") pod \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.904367 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ae2d68-678a-4a9e-8337-5ccc24d274df-config-volume" (OuterVolumeSpecName: "config-volume") pod "f2ae2d68-678a-4a9e-8337-5ccc24d274df" (UID: "f2ae2d68-678a-4a9e-8337-5ccc24d274df"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.903795 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2ae2d68-678a-4a9e-8337-5ccc24d274df-config-volume\") pod \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.904477 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2ae2d68-678a-4a9e-8337-5ccc24d274df-secret-volume\") pod \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\" (UID: \"f2ae2d68-678a-4a9e-8337-5ccc24d274df\") " Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.905321 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2ae2d68-678a-4a9e-8337-5ccc24d274df-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.910563 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ae2d68-678a-4a9e-8337-5ccc24d274df-kube-api-access-cz5j8" (OuterVolumeSpecName: "kube-api-access-cz5j8") pod "f2ae2d68-678a-4a9e-8337-5ccc24d274df" (UID: "f2ae2d68-678a-4a9e-8337-5ccc24d274df"). InnerVolumeSpecName "kube-api-access-cz5j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:15:02 crc kubenswrapper[4730]: I0131 17:15:02.910999 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2ae2d68-678a-4a9e-8337-5ccc24d274df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f2ae2d68-678a-4a9e-8337-5ccc24d274df" (UID: "f2ae2d68-678a-4a9e-8337-5ccc24d274df"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 17:15:03 crc kubenswrapper[4730]: I0131 17:15:03.006324 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz5j8\" (UniqueName: \"kubernetes.io/projected/f2ae2d68-678a-4a9e-8337-5ccc24d274df-kube-api-access-cz5j8\") on node \"crc\" DevicePath \"\"" Jan 31 17:15:03 crc kubenswrapper[4730]: I0131 17:15:03.006351 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2ae2d68-678a-4a9e-8337-5ccc24d274df-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 17:15:03 crc kubenswrapper[4730]: I0131 17:15:03.511403 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" event={"ID":"f2ae2d68-678a-4a9e-8337-5ccc24d274df","Type":"ContainerDied","Data":"e5728573bef31ad8686a37639561b0eac0c45476ee82a0451326618433ef4fc9"} Jan 31 17:15:03 crc kubenswrapper[4730]: I0131 17:15:03.511480 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5728573bef31ad8686a37639561b0eac0c45476ee82a0451326618433ef4fc9" Jan 31 17:15:03 crc kubenswrapper[4730]: I0131 17:15:03.511597 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497995-zrpmg" Jan 31 17:15:03 crc kubenswrapper[4730]: I0131 17:15:03.947188 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl"] Jan 31 17:15:03 crc kubenswrapper[4730]: I0131 17:15:03.957670 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497950-c6ftl"] Jan 31 17:15:04 crc kubenswrapper[4730]: I0131 17:15:04.478232 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b61a61bd-3aaa-42b6-9681-2945b18462c2" path="/var/lib/kubelet/pods/b61a61bd-3aaa-42b6-9681-2945b18462c2/volumes" Jan 31 17:15:11 crc kubenswrapper[4730]: I0131 17:15:11.465234 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:15:11 crc kubenswrapper[4730]: I0131 17:15:11.465694 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:15:11 crc kubenswrapper[4730]: I0131 17:15:11.465741 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:15:11 crc kubenswrapper[4730]: I0131 17:15:11.465900 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:15:11 crc kubenswrapper[4730]: E0131 17:15:11.466761 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:15:13 crc kubenswrapper[4730]: I0131 17:15:13.467225 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:15:13 crc kubenswrapper[4730]: I0131 17:15:13.467545 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:13 crc kubenswrapper[4730]: E0131 17:15:13.468110 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:16 crc kubenswrapper[4730]: I0131 17:15:16.464360 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:15:16 crc kubenswrapper[4730]: E0131 17:15:16.464876 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:15:22 crc kubenswrapper[4730]: I0131 17:15:22.466779 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:15:22 crc kubenswrapper[4730]: I0131 17:15:22.469074 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:15:22 crc kubenswrapper[4730]: I0131 17:15:22.469107 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:15:22 crc kubenswrapper[4730]: I0131 17:15:22.469192 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:15:22 crc kubenswrapper[4730]: E0131 17:15:22.469711 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:15:24 crc kubenswrapper[4730]: I0131 17:15:24.947336 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rhb8m/must-gather-wdrtz"] Jan 31 17:15:24 crc kubenswrapper[4730]: E0131 17:15:24.948399 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ae2d68-678a-4a9e-8337-5ccc24d274df" containerName="collect-profiles" Jan 31 17:15:24 crc kubenswrapper[4730]: I0131 17:15:24.948416 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ae2d68-678a-4a9e-8337-5ccc24d274df" containerName="collect-profiles" Jan 31 17:15:24 crc kubenswrapper[4730]: I0131 17:15:24.948656 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ae2d68-678a-4a9e-8337-5ccc24d274df" containerName="collect-profiles" Jan 31 17:15:24 crc kubenswrapper[4730]: I0131 17:15:24.949998 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:24 crc kubenswrapper[4730]: I0131 17:15:24.952178 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rhb8m"/"kube-root-ca.crt" Jan 31 17:15:24 crc kubenswrapper[4730]: I0131 17:15:24.953286 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rhb8m"/"openshift-service-ca.crt" Jan 31 17:15:24 crc kubenswrapper[4730]: I0131 17:15:24.980171 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rhb8m/must-gather-wdrtz"] Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.081649 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xnm7\" (UniqueName: \"kubernetes.io/projected/56d88f94-8bbf-4f46-883d-7d370f7b7e33-kube-api-access-4xnm7\") pod \"must-gather-wdrtz\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.082060 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d88f94-8bbf-4f46-883d-7d370f7b7e33-must-gather-output\") pod \"must-gather-wdrtz\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.183347 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d88f94-8bbf-4f46-883d-7d370f7b7e33-must-gather-output\") pod \"must-gather-wdrtz\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.183750 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xnm7\" (UniqueName: \"kubernetes.io/projected/56d88f94-8bbf-4f46-883d-7d370f7b7e33-kube-api-access-4xnm7\") pod \"must-gather-wdrtz\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.184610 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d88f94-8bbf-4f46-883d-7d370f7b7e33-must-gather-output\") pod 
\"must-gather-wdrtz\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.205999 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xnm7\" (UniqueName: \"kubernetes.io/projected/56d88f94-8bbf-4f46-883d-7d370f7b7e33-kube-api-access-4xnm7\") pod \"must-gather-wdrtz\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.266560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.660104 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rhb8m/must-gather-wdrtz"] Jan 31 17:15:25 crc kubenswrapper[4730]: W0131 17:15:25.668051 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56d88f94_8bbf_4f46_883d_7d370f7b7e33.slice/crio-356b26f1bcf489b2ccc2e5c339be7f6667943a08f953ca6f46ae611124b969b6 WatchSource:0}: Error finding container 356b26f1bcf489b2ccc2e5c339be7f6667943a08f953ca6f46ae611124b969b6: Status 404 returned error can't find the container with id 356b26f1bcf489b2ccc2e5c339be7f6667943a08f953ca6f46ae611124b969b6 Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.670573 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 17:15:25 crc kubenswrapper[4730]: I0131 17:15:25.742289 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" event={"ID":"56d88f94-8bbf-4f46-883d-7d370f7b7e33","Type":"ContainerStarted","Data":"356b26f1bcf489b2ccc2e5c339be7f6667943a08f953ca6f46ae611124b969b6"} Jan 31 17:15:26 crc kubenswrapper[4730]: I0131 17:15:26.465149 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:15:26 crc kubenswrapper[4730]: I0131 17:15:26.465586 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:26 crc kubenswrapper[4730]: E0131 17:15:26.686173 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:26 crc kubenswrapper[4730]: I0131 17:15:26.753902 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"692448793cb3875179e40be06590edd3c725a1b53a309f2ecb9a14d90108956b"} Jan 31 17:15:26 crc kubenswrapper[4730]: I0131 17:15:26.754451 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:26 crc kubenswrapper[4730]: E0131 17:15:26.754626 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" 
pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:26 crc kubenswrapper[4730]: I0131 17:15:26.754751 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:15:27 crc kubenswrapper[4730]: I0131 17:15:27.760532 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:27 crc kubenswrapper[4730]: E0131 17:15:27.761009 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:29 crc kubenswrapper[4730]: I0131 17:15:29.464252 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:15:32 crc kubenswrapper[4730]: I0131 17:15:32.808470 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"8f0b779e1030f9cbd3ff463a2fefa2b4f4a055fd00a384af88e6f8249382c9c3"} Jan 31 17:15:32 crc kubenswrapper[4730]: I0131 17:15:32.816451 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" event={"ID":"56d88f94-8bbf-4f46-883d-7d370f7b7e33","Type":"ContainerStarted","Data":"c6f969ee869575d0a7ec8770d6682e7f0ceef84fe2d202282918812c3d3435f0"} Jan 31 17:15:33 crc kubenswrapper[4730]: I0131 17:15:33.670465 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:33 crc kubenswrapper[4730]: I0131 17:15:33.824914 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" event={"ID":"56d88f94-8bbf-4f46-883d-7d370f7b7e33","Type":"ContainerStarted","Data":"ee6da4e03bfaaf100a360969f6dcb54cbff7e71c20916a9dabf9d6a159b39a50"} Jan 31 17:15:33 crc kubenswrapper[4730]: I0131 17:15:33.841254 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" podStartSLOduration=3.102736654 podStartE2EDuration="9.841239771s" podCreationTimestamp="2026-01-31 17:15:24 +0000 UTC" firstStartedPulling="2026-01-31 17:15:25.670540354 +0000 UTC m=+2712.476597270" lastFinishedPulling="2026-01-31 17:15:32.409043471 +0000 UTC m=+2719.215100387" observedRunningTime="2026-01-31 17:15:33.837226679 +0000 UTC m=+2720.643283595" watchObservedRunningTime="2026-01-31 17:15:33.841239771 +0000 UTC m=+2720.647296687" Jan 31 17:15:35 crc kubenswrapper[4730]: I0131 17:15:35.663190 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.465464 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.465800 4730 scope.go:117] 
"RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.465850 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.465942 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.660846 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:36 crc kubenswrapper[4730]: E0131 17:15:36.702784 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.850378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95"} Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.851129 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.851186 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:15:36 crc kubenswrapper[4730]: I0131 17:15:36.851272 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:15:36 crc kubenswrapper[4730]: E0131 17:15:36.851605 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.183573 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvf5d"] Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.201655 4730 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-mvf5d"] Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.201766 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.249932 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-utilities\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.250000 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6zfj\" (UniqueName: \"kubernetes.io/projected/003f8807-75a6-44d8-a9e0-5e4ec301af9c-kube-api-access-q6zfj\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.250052 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-catalog-content\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.351635 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-utilities\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.352380 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6zfj\" (UniqueName: \"kubernetes.io/projected/003f8807-75a6-44d8-a9e0-5e4ec301af9c-kube-api-access-q6zfj\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.352433 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-catalog-content\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.352668 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-catalog-content\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.352303 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-utilities\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.380092 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6zfj\" (UniqueName: \"kubernetes.io/projected/003f8807-75a6-44d8-a9e0-5e4ec301af9c-kube-api-access-q6zfj\") pod \"certified-operators-mvf5d\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:37 crc kubenswrapper[4730]: I0131 17:15:37.546192 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:38 crc kubenswrapper[4730]: I0131 17:15:38.197436 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvf5d"] Jan 31 17:15:38 crc kubenswrapper[4730]: I0131 17:15:38.867226 4730 generic.go:334] "Generic (PLEG): container finished" podID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerID="21ee4ccc4f36eb261c39a1b3535914b8b33335a73dc228637f9ce818c02e39b6" exitCode=0 Jan 31 17:15:38 crc kubenswrapper[4730]: I0131 17:15:38.867311 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf5d" event={"ID":"003f8807-75a6-44d8-a9e0-5e4ec301af9c","Type":"ContainerDied","Data":"21ee4ccc4f36eb261c39a1b3535914b8b33335a73dc228637f9ce818c02e39b6"} Jan 31 17:15:38 crc kubenswrapper[4730]: I0131 17:15:38.867771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf5d" event={"ID":"003f8807-75a6-44d8-a9e0-5e4ec301af9c","Type":"ContainerStarted","Data":"dcab494daa2253a0c986f22c9fcc472236f6a8da40d0bf768d7b7877fe87ee9a"} Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.234264 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rhb8m/crc-debug-r6hmh"] Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.235569 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.237368 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rhb8m"/"default-dockercfg-srr4v" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.312004 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncs7\" (UniqueName: \"kubernetes.io/projected/32f1c59b-121a-498a-80fe-b71cdb290908-kube-api-access-hncs7\") pod \"crc-debug-r6hmh\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.312124 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32f1c59b-121a-498a-80fe-b71cdb290908-host\") pod \"crc-debug-r6hmh\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.414792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hncs7\" (UniqueName: \"kubernetes.io/projected/32f1c59b-121a-498a-80fe-b71cdb290908-kube-api-access-hncs7\") pod \"crc-debug-r6hmh\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.414926 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32f1c59b-121a-498a-80fe-b71cdb290908-host\") pod \"crc-debug-r6hmh\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.415074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32f1c59b-121a-498a-80fe-b71cdb290908-host\") pod \"crc-debug-r6hmh\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.447743 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hncs7\" (UniqueName: \"kubernetes.io/projected/32f1c59b-121a-498a-80fe-b71cdb290908-kube-api-access-hncs7\") pod \"crc-debug-r6hmh\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.465646 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:39 crc kubenswrapper[4730]: E0131 17:15:39.465877 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.472534 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.551471 4730 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:15:39 crc kubenswrapper[4730]: W0131 17:15:39.586960 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32f1c59b_121a_498a_80fe_b71cdb290908.slice/crio-e5927f86de8ebd11b1c23333748966d94e07bf16ebea27a1f66f408e2854cf50 WatchSource:0}: Error finding container e5927f86de8ebd11b1c23333748966d94e07bf16ebea27a1f66f408e2854cf50: Status 404 returned error can't find the container with id e5927f86de8ebd11b1c23333748966d94e07bf16ebea27a1f66f408e2854cf50 Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.660114 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.660184 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.876608 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" event={"ID":"32f1c59b-121a-498a-80fe-b71cdb290908","Type":"ContainerStarted","Data":"e5927f86de8ebd11b1c23333748966d94e07bf16ebea27a1f66f408e2854cf50"} Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.879244 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"692448793cb3875179e40be06590edd3c725a1b53a309f2ecb9a14d90108956b"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.879271 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.879299 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://692448793cb3875179e40be06590edd3c725a1b53a309f2ecb9a14d90108956b" gracePeriod=30 Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.880026 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf5d" event={"ID":"003f8807-75a6-44d8-a9e0-5e4ec301af9c","Type":"ContainerStarted","Data":"4fb79c5e5099da234d604b35fe05a4fd334e92c68c0168446529956ad9cf25f1"} Jan 31 17:15:39 crc kubenswrapper[4730]: I0131 17:15:39.886442 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": EOF" Jan 31 17:15:40 crc kubenswrapper[4730]: E0131 17:15:40.293022 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:40 crc kubenswrapper[4730]: I0131 17:15:40.660278 4730 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:40 crc kubenswrapper[4730]: I0131 17:15:40.887578 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="692448793cb3875179e40be06590edd3c725a1b53a309f2ecb9a14d90108956b" exitCode=0 Jan 31 17:15:40 crc kubenswrapper[4730]: I0131 17:15:40.887662 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"692448793cb3875179e40be06590edd3c725a1b53a309f2ecb9a14d90108956b"} Jan 31 17:15:40 crc kubenswrapper[4730]: I0131 17:15:40.888085 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f"} Jan 31 17:15:40 crc kubenswrapper[4730]: I0131 17:15:40.888110 4730 scope.go:117] "RemoveContainer" containerID="c1112e9043285915a77db5b9e5d13fd88cb9194466654b35f982d16d377db18f" Jan 31 17:15:40 crc kubenswrapper[4730]: I0131 17:15:40.888498 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:15:40 crc kubenswrapper[4730]: I0131 17:15:40.889020 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:40 crc kubenswrapper[4730]: E0131 17:15:40.889287 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:41 crc kubenswrapper[4730]: I0131 17:15:41.552759 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:15:41 crc kubenswrapper[4730]: E0131 17:15:41.552901 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:15:41 crc kubenswrapper[4730]: E0131 17:15:41.552956 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:17:43.552941074 +0000 UTC m=+2850.358997990 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:15:41 crc kubenswrapper[4730]: I0131 17:15:41.901968 4730 generic.go:334] "Generic (PLEG): container finished" podID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerID="4fb79c5e5099da234d604b35fe05a4fd334e92c68c0168446529956ad9cf25f1" exitCode=0 Jan 31 17:15:41 crc kubenswrapper[4730]: I0131 17:15:41.902224 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf5d" event={"ID":"003f8807-75a6-44d8-a9e0-5e4ec301af9c","Type":"ContainerDied","Data":"4fb79c5e5099da234d604b35fe05a4fd334e92c68c0168446529956ad9cf25f1"} Jan 31 17:15:41 crc kubenswrapper[4730]: I0131 17:15:41.915465 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:41 crc kubenswrapper[4730]: E0131 17:15:41.915652 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:41 crc kubenswrapper[4730]: I0131 17:15:41.921646 4730 scope.go:117] "RemoveContainer" containerID="ab5d64ae10400ba0b9491f8991adc5a601b3532bafc3e3e123b49da1929b68d9" Jan 31 17:15:42 crc kubenswrapper[4730]: I0131 17:15:42.926770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf5d" event={"ID":"003f8807-75a6-44d8-a9e0-5e4ec301af9c","Type":"ContainerStarted","Data":"644fd2795659f9dfd44d7575f1d5dc67c48301c1611c891c98936ed58a2b627c"} Jan 31 17:15:42 crc kubenswrapper[4730]: I0131 17:15:42.945782 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvf5d" podStartSLOduration=2.400114682 podStartE2EDuration="5.945766901s" podCreationTimestamp="2026-01-31 17:15:37 +0000 UTC" firstStartedPulling="2026-01-31 17:15:38.869003839 +0000 UTC m=+2725.675060755" lastFinishedPulling="2026-01-31 17:15:42.414656068 +0000 UTC m=+2729.220712974" observedRunningTime="2026-01-31 17:15:42.943126097 +0000 UTC m=+2729.749183013" watchObservedRunningTime="2026-01-31 17:15:42.945766901 +0000 UTC m=+2729.751823817" Jan 31 17:15:43 crc kubenswrapper[4730]: I0131 17:15:43.943851 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" exitCode=1 Jan 31 17:15:43 crc kubenswrapper[4730]: I0131 17:15:43.943924 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457"} Jan 31 17:15:43 crc kubenswrapper[4730]: I0131 17:15:43.944192 4730 scope.go:117] "RemoveContainer" containerID="5911699a6649864673e82811a13e8c8a61cd7d9cbd9a056f64c800b9db1d4cd1" Jan 31 17:15:43 crc kubenswrapper[4730]: I0131 17:15:43.944939 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:15:43 crc 
kubenswrapper[4730]: I0131 17:15:43.944994 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:15:43 crc kubenswrapper[4730]: I0131 17:15:43.945064 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:15:43 crc kubenswrapper[4730]: I0131 17:15:43.945082 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:15:43 crc kubenswrapper[4730]: E0131 17:15:43.945431 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:15:45 crc kubenswrapper[4730]: I0131 17:15:45.660631 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:45 crc kubenswrapper[4730]: I0131 17:15:45.667684 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:47 crc kubenswrapper[4730]: I0131 17:15:47.546391 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:47 crc kubenswrapper[4730]: I0131 17:15:47.546694 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:48 crc kubenswrapper[4730]: I0131 17:15:48.602730 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mvf5d" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="registry-server" probeResult="failure" output=< Jan 31 17:15:48 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 17:15:48 crc kubenswrapper[4730]: > Jan 31 17:15:48 crc kubenswrapper[4730]: I0131 17:15:48.660039 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:50 crc kubenswrapper[4730]: I0131 17:15:50.658344 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Jan 31 17:15:51 crc kubenswrapper[4730]: I0131 17:15:51.657236 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:51 crc kubenswrapper[4730]: I0131 17:15:51.657595 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:15:51 crc kubenswrapper[4730]: I0131 17:15:51.658418 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:15:51 crc kubenswrapper[4730]: I0131 17:15:51.658443 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:51 crc kubenswrapper[4730]: I0131 17:15:51.658472 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" gracePeriod=30 Jan 31 17:15:51 crc kubenswrapper[4730]: I0131 17:15:51.667916 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:15:52 crc kubenswrapper[4730]: I0131 17:15:52.076074 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" exitCode=0 Jan 31 17:15:52 crc kubenswrapper[4730]: I0131 17:15:52.076114 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f"} Jan 31 17:15:52 crc kubenswrapper[4730]: I0131 17:15:52.076217 4730 scope.go:117] "RemoveContainer" containerID="692448793cb3875179e40be06590edd3c725a1b53a309f2ecb9a14d90108956b" Jan 31 17:15:53 crc kubenswrapper[4730]: E0131 17:15:53.682596 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:54 crc kubenswrapper[4730]: I0131 17:15:54.090452 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" event={"ID":"32f1c59b-121a-498a-80fe-b71cdb290908","Type":"ContainerStarted","Data":"b6634ae31ed9ba787de20e982b75a2b0d8d6c6c04b4efb4619bdccc80d9927cf"} Jan 31 17:15:54 crc kubenswrapper[4730]: I0131 17:15:54.093270 
4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:15:54 crc kubenswrapper[4730]: I0131 17:15:54.093306 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:15:54 crc kubenswrapper[4730]: E0131 17:15:54.093596 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:15:54 crc kubenswrapper[4730]: I0131 17:15:54.107235 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" podStartSLOduration=0.968565212 podStartE2EDuration="15.107211467s" podCreationTimestamp="2026-01-31 17:15:39 +0000 UTC" firstStartedPulling="2026-01-31 17:15:39.595916676 +0000 UTC m=+2726.401973592" lastFinishedPulling="2026-01-31 17:15:53.734562931 +0000 UTC m=+2740.540619847" observedRunningTime="2026-01-31 17:15:54.10408094 +0000 UTC m=+2740.910137856" watchObservedRunningTime="2026-01-31 17:15:54.107211467 +0000 UTC m=+2740.913268383" Jan 31 17:15:55 crc kubenswrapper[4730]: E0131 17:15:55.811762 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:15:56 crc kubenswrapper[4730]: I0131 17:15:56.108090 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:15:56 crc kubenswrapper[4730]: I0131 17:15:56.468661 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:15:56 crc kubenswrapper[4730]: I0131 17:15:56.469213 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:15:56 crc kubenswrapper[4730]: I0131 17:15:56.469385 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:15:56 crc kubenswrapper[4730]: I0131 17:15:56.469399 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:15:56 crc kubenswrapper[4730]: E0131 17:15:56.470188 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:15:57 crc kubenswrapper[4730]: I0131 17:15:57.594007 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:57 crc kubenswrapper[4730]: I0131 17:15:57.659964 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:15:57 crc kubenswrapper[4730]: I0131 17:15:57.833166 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvf5d"] Jan 31 17:15:59 crc kubenswrapper[4730]: I0131 17:15:59.130940 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mvf5d" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="registry-server" containerID="cri-o://644fd2795659f9dfd44d7575f1d5dc67c48301c1611c891c98936ed58a2b627c" gracePeriod=2 Jan 31 17:15:59 crc kubenswrapper[4730]: E0131 17:15:59.236424 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod003f8807_75a6_44d8_a9e0_5e4ec301af9c.slice/crio-644fd2795659f9dfd44d7575f1d5dc67c48301c1611c891c98936ed58a2b627c.scope\": RecentStats: unable to find data in memory cache]" Jan 31 17:16:00 crc kubenswrapper[4730]: I0131 17:16:00.139512 4730 generic.go:334] "Generic (PLEG): container finished" podID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerID="644fd2795659f9dfd44d7575f1d5dc67c48301c1611c891c98936ed58a2b627c" exitCode=0 Jan 31 17:16:00 crc kubenswrapper[4730]: I0131 17:16:00.139583 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mvf5d" event={"ID":"003f8807-75a6-44d8-a9e0-5e4ec301af9c","Type":"ContainerDied","Data":"644fd2795659f9dfd44d7575f1d5dc67c48301c1611c891c98936ed58a2b627c"} Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.107775 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.167887 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf5d" event={"ID":"003f8807-75a6-44d8-a9e0-5e4ec301af9c","Type":"ContainerDied","Data":"dcab494daa2253a0c986f22c9fcc472236f6a8da40d0bf768d7b7877fe87ee9a"} Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.167938 4730 scope.go:117] "RemoveContainer" containerID="644fd2795659f9dfd44d7575f1d5dc67c48301c1611c891c98936ed58a2b627c" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.168059 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvf5d" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.230693 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-catalog-content\") pod \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.230755 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6zfj\" (UniqueName: \"kubernetes.io/projected/003f8807-75a6-44d8-a9e0-5e4ec301af9c-kube-api-access-q6zfj\") pod \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.230823 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-utilities\") pod \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\" (UID: \"003f8807-75a6-44d8-a9e0-5e4ec301af9c\") " Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.231511 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-utilities" (OuterVolumeSpecName: "utilities") pod "003f8807-75a6-44d8-a9e0-5e4ec301af9c" (UID: "003f8807-75a6-44d8-a9e0-5e4ec301af9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.241592 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/003f8807-75a6-44d8-a9e0-5e4ec301af9c-kube-api-access-q6zfj" (OuterVolumeSpecName: "kube-api-access-q6zfj") pod "003f8807-75a6-44d8-a9e0-5e4ec301af9c" (UID: "003f8807-75a6-44d8-a9e0-5e4ec301af9c"). InnerVolumeSpecName "kube-api-access-q6zfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.302654 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "003f8807-75a6-44d8-a9e0-5e4ec301af9c" (UID: "003f8807-75a6-44d8-a9e0-5e4ec301af9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.333524 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.333734 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6zfj\" (UniqueName: \"kubernetes.io/projected/003f8807-75a6-44d8-a9e0-5e4ec301af9c-kube-api-access-q6zfj\") on node \"crc\" DevicePath \"\"" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.333832 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003f8807-75a6-44d8-a9e0-5e4ec301af9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.526525 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvf5d"] Jan 31 17:16:03 crc kubenswrapper[4730]: I0131 17:16:03.549299 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mvf5d"] Jan 31 17:16:04 crc kubenswrapper[4730]: I0131 17:16:04.469192 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:16:04 crc kubenswrapper[4730]: I0131 17:16:04.469222 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:16:04 crc kubenswrapper[4730]: E0131 17:16:04.469419 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:16:04 crc kubenswrapper[4730]: I0131 17:16:04.474037 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" path="/var/lib/kubelet/pods/003f8807-75a6-44d8-a9e0-5e4ec301af9c/volumes" Jan 31 17:16:05 crc kubenswrapper[4730]: I0131 17:16:05.580164 4730 scope.go:117] "RemoveContainer" containerID="4fb79c5e5099da234d604b35fe05a4fd334e92c68c0168446529956ad9cf25f1" Jan 31 17:16:05 crc kubenswrapper[4730]: I0131 17:16:05.667881 4730 scope.go:117] "RemoveContainer" containerID="21ee4ccc4f36eb261c39a1b3535914b8b33335a73dc228637f9ce818c02e39b6" Jan 31 17:16:08 crc kubenswrapper[4730]: I0131 17:16:08.464332 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:16:08 crc kubenswrapper[4730]: I0131 17:16:08.464869 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:16:08 crc kubenswrapper[4730]: I0131 17:16:08.464945 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:16:08 crc kubenswrapper[4730]: I0131 17:16:08.464952 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:16:08 
crc kubenswrapper[4730]: E0131 17:16:08.465239 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:16:13 crc kubenswrapper[4730]: I0131 17:16:13.250932 4730 generic.go:334] "Generic (PLEG): container finished" podID="32f1c59b-121a-498a-80fe-b71cdb290908" containerID="b6634ae31ed9ba787de20e982b75a2b0d8d6c6c04b4efb4619bdccc80d9927cf" exitCode=0 Jan 31 17:16:13 crc kubenswrapper[4730]: I0131 17:16:13.251018 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" event={"ID":"32f1c59b-121a-498a-80fe-b71cdb290908","Type":"ContainerDied","Data":"b6634ae31ed9ba787de20e982b75a2b0d8d6c6c04b4efb4619bdccc80d9927cf"} Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.360428 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.388177 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rhb8m/crc-debug-r6hmh"] Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.393890 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rhb8m/crc-debug-r6hmh"] Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.446305 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hncs7\" (UniqueName: \"kubernetes.io/projected/32f1c59b-121a-498a-80fe-b71cdb290908-kube-api-access-hncs7\") pod \"32f1c59b-121a-498a-80fe-b71cdb290908\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.446531 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32f1c59b-121a-498a-80fe-b71cdb290908-host\") pod \"32f1c59b-121a-498a-80fe-b71cdb290908\" (UID: \"32f1c59b-121a-498a-80fe-b71cdb290908\") " Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.446632 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32f1c59b-121a-498a-80fe-b71cdb290908-host" (OuterVolumeSpecName: "host") pod "32f1c59b-121a-498a-80fe-b71cdb290908" (UID: "32f1c59b-121a-498a-80fe-b71cdb290908"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.447019 4730 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32f1c59b-121a-498a-80fe-b71cdb290908-host\") on node \"crc\" DevicePath \"\"" Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.461670 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f1c59b-121a-498a-80fe-b71cdb290908-kube-api-access-hncs7" (OuterVolumeSpecName: "kube-api-access-hncs7") pod "32f1c59b-121a-498a-80fe-b71cdb290908" (UID: "32f1c59b-121a-498a-80fe-b71cdb290908"). InnerVolumeSpecName "kube-api-access-hncs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.478373 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f1c59b-121a-498a-80fe-b71cdb290908" path="/var/lib/kubelet/pods/32f1c59b-121a-498a-80fe-b71cdb290908/volumes" Jan 31 17:16:14 crc kubenswrapper[4730]: I0131 17:16:14.549154 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hncs7\" (UniqueName: \"kubernetes.io/projected/32f1c59b-121a-498a-80fe-b71cdb290908-kube-api-access-hncs7\") on node \"crc\" DevicePath \"\"" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.269592 4730 scope.go:117] "RemoveContainer" containerID="b6634ae31ed9ba787de20e982b75a2b0d8d6c6c04b4efb4619bdccc80d9927cf" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.269653 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-r6hmh" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.603391 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rhb8m/crc-debug-5m8d2"] Jan 31 17:16:15 crc kubenswrapper[4730]: E0131 17:16:15.603844 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="extract-utilities" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.603864 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="extract-utilities" Jan 31 17:16:15 crc kubenswrapper[4730]: E0131 17:16:15.603909 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="registry-server" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.603917 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="registry-server" Jan 31 17:16:15 crc kubenswrapper[4730]: E0131 17:16:15.603938 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="extract-content" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.603943 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="extract-content" Jan 31 17:16:15 crc kubenswrapper[4730]: E0131 17:16:15.603957 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f1c59b-121a-498a-80fe-b71cdb290908" containerName="container-00" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.603963 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f1c59b-121a-498a-80fe-b71cdb290908" containerName="container-00" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.604198 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="003f8807-75a6-44d8-a9e0-5e4ec301af9c" containerName="registry-server" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.604214 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f1c59b-121a-498a-80fe-b71cdb290908" containerName="container-00" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.605051 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.606574 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rhb8m"/"default-dockercfg-srr4v" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.670097 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36757278-3fc9-42d9-9d62-459a86336957-host\") pod \"crc-debug-5m8d2\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.670188 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btm2x\" (UniqueName: \"kubernetes.io/projected/36757278-3fc9-42d9-9d62-459a86336957-kube-api-access-btm2x\") pod \"crc-debug-5m8d2\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.771677 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btm2x\" (UniqueName: \"kubernetes.io/projected/36757278-3fc9-42d9-9d62-459a86336957-kube-api-access-btm2x\") pod \"crc-debug-5m8d2\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.772037 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36757278-3fc9-42d9-9d62-459a86336957-host\") pod \"crc-debug-5m8d2\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.772122 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36757278-3fc9-42d9-9d62-459a86336957-host\") pod \"crc-debug-5m8d2\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.795092 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btm2x\" (UniqueName: \"kubernetes.io/projected/36757278-3fc9-42d9-9d62-459a86336957-kube-api-access-btm2x\") pod \"crc-debug-5m8d2\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: I0131 17:16:15.921552 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:15 crc kubenswrapper[4730]: W0131 17:16:15.961010 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36757278_3fc9_42d9_9d62_459a86336957.slice/crio-70cc59496aacbe83667d1869e6c98a946c36372bf93b10eff787256769678f22 WatchSource:0}: Error finding container 70cc59496aacbe83667d1869e6c98a946c36372bf93b10eff787256769678f22: Status 404 returned error can't find the container with id 70cc59496aacbe83667d1869e6c98a946c36372bf93b10eff787256769678f22 Jan 31 17:16:16 crc kubenswrapper[4730]: I0131 17:16:16.280211 4730 generic.go:334] "Generic (PLEG): container finished" podID="36757278-3fc9-42d9-9d62-459a86336957" containerID="fdb8dd665e3fd6cfa857848887bd6273417b120b239e519a8f71bd63bb0a7811" exitCode=1 Jan 31 17:16:16 crc kubenswrapper[4730]: I0131 17:16:16.280279 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" event={"ID":"36757278-3fc9-42d9-9d62-459a86336957","Type":"ContainerDied","Data":"fdb8dd665e3fd6cfa857848887bd6273417b120b239e519a8f71bd63bb0a7811"} Jan 31 17:16:16 crc kubenswrapper[4730]: I0131 17:16:16.280568 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" event={"ID":"36757278-3fc9-42d9-9d62-459a86336957","Type":"ContainerStarted","Data":"70cc59496aacbe83667d1869e6c98a946c36372bf93b10eff787256769678f22"} Jan 31 17:16:16 crc kubenswrapper[4730]: I0131 17:16:16.317908 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rhb8m/crc-debug-5m8d2"] Jan 31 17:16:16 crc kubenswrapper[4730]: I0131 17:16:16.324149 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rhb8m/crc-debug-5m8d2"] Jan 31 17:16:17 crc kubenswrapper[4730]: I0131 17:16:17.375743 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:17 crc kubenswrapper[4730]: I0131 17:16:17.504039 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36757278-3fc9-42d9-9d62-459a86336957-host\") pod \"36757278-3fc9-42d9-9d62-459a86336957\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " Jan 31 17:16:17 crc kubenswrapper[4730]: I0131 17:16:17.504124 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btm2x\" (UniqueName: \"kubernetes.io/projected/36757278-3fc9-42d9-9d62-459a86336957-kube-api-access-btm2x\") pod \"36757278-3fc9-42d9-9d62-459a86336957\" (UID: \"36757278-3fc9-42d9-9d62-459a86336957\") " Jan 31 17:16:17 crc kubenswrapper[4730]: I0131 17:16:17.504110 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36757278-3fc9-42d9-9d62-459a86336957-host" (OuterVolumeSpecName: "host") pod "36757278-3fc9-42d9-9d62-459a86336957" (UID: "36757278-3fc9-42d9-9d62-459a86336957"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 17:16:17 crc kubenswrapper[4730]: I0131 17:16:17.504689 4730 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36757278-3fc9-42d9-9d62-459a86336957-host\") on node \"crc\" DevicePath \"\"" Jan 31 17:16:17 crc kubenswrapper[4730]: I0131 17:16:17.509994 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36757278-3fc9-42d9-9d62-459a86336957-kube-api-access-btm2x" (OuterVolumeSpecName: "kube-api-access-btm2x") pod "36757278-3fc9-42d9-9d62-459a86336957" (UID: "36757278-3fc9-42d9-9d62-459a86336957"). InnerVolumeSpecName "kube-api-access-btm2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:16:17 crc kubenswrapper[4730]: I0131 17:16:17.606255 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btm2x\" (UniqueName: \"kubernetes.io/projected/36757278-3fc9-42d9-9d62-459a86336957-kube-api-access-btm2x\") on node \"crc\" DevicePath \"\"" Jan 31 17:16:18 crc kubenswrapper[4730]: I0131 17:16:18.296050 4730 scope.go:117] "RemoveContainer" containerID="fdb8dd665e3fd6cfa857848887bd6273417b120b239e519a8f71bd63bb0a7811" Jan 31 17:16:18 crc kubenswrapper[4730]: I0131 17:16:18.296151 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/crc-debug-5m8d2" Jan 31 17:16:18 crc kubenswrapper[4730]: I0131 17:16:18.476978 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36757278-3fc9-42d9-9d62-459a86336957" path="/var/lib/kubelet/pods/36757278-3fc9-42d9-9d62-459a86336957/volumes" Jan 31 17:16:19 crc kubenswrapper[4730]: I0131 17:16:19.464557 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:16:19 crc kubenswrapper[4730]: I0131 17:16:19.464710 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:16:19 crc kubenswrapper[4730]: E0131 17:16:19.465027 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:16:22 crc kubenswrapper[4730]: I0131 17:16:22.465120 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:16:22 crc kubenswrapper[4730]: I0131 17:16:22.465428 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:16:22 crc kubenswrapper[4730]: I0131 17:16:22.465500 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:16:22 crc kubenswrapper[4730]: I0131 17:16:22.465507 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:16:22 crc kubenswrapper[4730]: E0131 17:16:22.465908 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:16:32 crc kubenswrapper[4730]: I0131 17:16:32.464464 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:16:32 crc kubenswrapper[4730]: I0131 17:16:32.465058 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:16:32 crc kubenswrapper[4730]: E0131 17:16:32.465291 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:16:35 crc kubenswrapper[4730]: I0131 17:16:35.464665 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:16:35 crc kubenswrapper[4730]: I0131 17:16:35.465011 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:16:35 crc kubenswrapper[4730]: I0131 17:16:35.465082 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:16:35 crc kubenswrapper[4730]: I0131 17:16:35.465090 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:16:35 crc kubenswrapper[4730]: E0131 17:16:35.465360 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:16:45 crc kubenswrapper[4730]: I0131 17:16:45.464639 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:16:45 crc kubenswrapper[4730]: I0131 17:16:45.465303 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:16:45 crc kubenswrapper[4730]: E0131 17:16:45.465855 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:16:46 crc kubenswrapper[4730]: I0131 17:16:46.464393 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:16:46 crc kubenswrapper[4730]: I0131 17:16:46.464730 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:16:46 crc kubenswrapper[4730]: I0131 17:16:46.464818 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:16:46 crc kubenswrapper[4730]: I0131 17:16:46.464826 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:16:46 crc kubenswrapper[4730]: E0131 17:16:46.465183 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.058388 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dc5f7996-jrfrx_fd701548-630f-4a34-be15-e97ed8699a34/barbican-api/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.262321 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dc5f7996-jrfrx_fd701548-630f-4a34-be15-e97ed8699a34/barbican-api-log/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.283785 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-7ffbbc76b4-9vr9z_73aa808b-e690-4e00-b458-4d30965fe1f8/barbican-keystone-listener/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.370594 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7ffbbc76b4-9vr9z_73aa808b-e690-4e00-b458-4d30965fe1f8/barbican-keystone-listener-log/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.531509 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74c8bcbdc9-xg47w_de24c449-9dfc-4e52-b571-ce305a73a1a7/barbican-worker/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.587567 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74c8bcbdc9-xg47w_de24c449-9dfc-4e52-b571-ce305a73a1a7/barbican-worker-log/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.767452 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f64b5463-38cd-4c71-b9ea-ce3c348f6b06/proxy-httpd/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.806075 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f64b5463-38cd-4c71-b9ea-ce3c348f6b06/ceilometer-central-agent/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.821631 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f64b5463-38cd-4c71-b9ea-ce3c348f6b06/ceilometer-notification-agent/0.log" Jan 31 17:16:47 crc kubenswrapper[4730]: I0131 17:16:47.883028 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f64b5463-38cd-4c71-b9ea-ce3c348f6b06/sg-core/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.002152 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fb708c6f-d3c0-4b3c-a4d9-48b759f11153/cinder-api/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.064173 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fb708c6f-d3c0-4b3c-a4d9-48b759f11153/cinder-api-log/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.241213 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f328aa35-7979-4ff9-ab15-57e088728259/probe/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.247096 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f328aa35-7979-4ff9-ab15-57e088728259/cinder-scheduler/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.347480 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-95bd95597-lwsxh_6357893e-9e12-47db-a262-966a020b4aa2/init/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.538627 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-95bd95597-lwsxh_6357893e-9e12-47db-a262-966a020b4aa2/init/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.597061 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-95bd95597-lwsxh_6357893e-9e12-47db-a262-966a020b4aa2/dnsmasq-dns/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.607636 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f25fee22-a834-4f4b-82f3-fc6deea85888/glance-httpd/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.733877 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_f25fee22-a834-4f4b-82f3-fc6deea85888/glance-log/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.804537 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_78823119-dbb2-462d-8c77-b9df0742a7a9/glance-httpd/0.log" Jan 31 17:16:48 crc kubenswrapper[4730]: I0131 17:16:48.851278 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_78823119-dbb2-462d-8c77-b9df0742a7a9/glance-log/0.log" Jan 31 17:16:49 crc kubenswrapper[4730]: I0131 17:16:49.086435 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7788464654-cr95d_0374cd2d-1d23-4f00-893a-278af887d99b/horizon/1.log" Jan 31 17:16:49 crc kubenswrapper[4730]: I0131 17:16:49.245322 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7788464654-cr95d_0374cd2d-1d23-4f00-893a-278af887d99b/horizon/0.log" Jan 31 17:16:49 crc kubenswrapper[4730]: I0131 17:16:49.308192 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7788464654-cr95d_0374cd2d-1d23-4f00-893a-278af887d99b/horizon-log/0.log" Jan 31 17:16:49 crc kubenswrapper[4730]: I0131 17:16:49.425823 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b54468f66-vfdd4_54eaed65-bddf-4e89-be4e-54386d1a6768/keystone-api/0.log" Jan 31 17:16:49 crc kubenswrapper[4730]: I0131 17:16:49.514735 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497981-dss2z_e2480e28-9925-4151-90a2-8db7d28e20f3/keystone-cron/0.log" Jan 31 17:16:49 crc kubenswrapper[4730]: I0131 17:16:49.627102 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c494d989-7c60-42f1-91ee-625a507f93d6/kube-state-metrics/0.log" Jan 31 17:16:49 crc kubenswrapper[4730]: I0131 17:16:49.928894 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c4d975ccf-jbdgk_ce037144-daeb-412d-94f1-69bc4ed97935/neutron-api/0.log" Jan 31 17:16:50 crc kubenswrapper[4730]: I0131 17:16:50.212566 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c4d975ccf-jbdgk_ce037144-daeb-412d-94f1-69bc4ed97935/neutron-httpd/0.log" Jan 31 17:16:50 crc kubenswrapper[4730]: I0131 17:16:50.553620 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63a7e1f3-1bc8-429e-a94c-729bc81d12ac/nova-api-api/0.log" Jan 31 17:16:50 crc kubenswrapper[4730]: I0131 17:16:50.582637 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63a7e1f3-1bc8-429e-a94c-729bc81d12ac/nova-api-log/0.log" Jan 31 17:16:50 crc kubenswrapper[4730]: I0131 17:16:50.894202 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d10fc2fe-5518-491d-bc51-7f8a1c7c7885/nova-cell0-conductor-conductor/0.log" Jan 31 17:16:50 crc kubenswrapper[4730]: I0131 17:16:50.908096 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c261dee9-9004-49c9-be31-6571f30f8dbc/nova-cell1-conductor-conductor/0.log" Jan 31 17:16:51 crc kubenswrapper[4730]: I0131 17:16:51.292246 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1debbac8-6d45-417c-a365-5fbe9f123d58/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 17:16:51 crc kubenswrapper[4730]: I0131 17:16:51.557297 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_2df710e8-90c4-40a0-adb4-cfac0c1333cb/nova-metadata-log/0.log" Jan 31 17:16:51 crc kubenswrapper[4730]: I0131 17:16:51.801012 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c76d57fa-01c5-40f7-8dbb-317f6adcbcc9/nova-scheduler-scheduler/0.log" Jan 31 17:16:51 crc kubenswrapper[4730]: I0131 17:16:51.911412 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_532c157a-5c9c-4043-a85a-5075e5ed9db5/mysql-bootstrap/0.log" Jan 31 17:16:52 crc kubenswrapper[4730]: I0131 17:16:52.068100 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_532c157a-5c9c-4043-a85a-5075e5ed9db5/mysql-bootstrap/0.log" Jan 31 17:16:52 crc kubenswrapper[4730]: I0131 17:16:52.094713 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_532c157a-5c9c-4043-a85a-5075e5ed9db5/galera/0.log" Jan 31 17:16:52 crc kubenswrapper[4730]: I0131 17:16:52.274456 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f96d233a-2c8a-4873-b53b-eb8c3e792160/mysql-bootstrap/0.log" Jan 31 17:16:52 crc kubenswrapper[4730]: I0131 17:16:52.383057 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2df710e8-90c4-40a0-adb4-cfac0c1333cb/nova-metadata-metadata/0.log" Jan 31 17:16:52 crc kubenswrapper[4730]: I0131 17:16:52.561138 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f96d233a-2c8a-4873-b53b-eb8c3e792160/mysql-bootstrap/0.log" Jan 31 17:16:52 crc kubenswrapper[4730]: I0131 17:16:52.628389 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f96d233a-2c8a-4873-b53b-eb8c3e792160/galera/0.log" Jan 31 17:16:52 crc kubenswrapper[4730]: I0131 17:16:52.691737 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7c87bbe1-ba38-4c0c-9a65-3ba268aeb10d/openstackclient/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.003246 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gbpkm_1b59c538-9f79-4e4e-9d74-6eb5f1758795/ovn-controller/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.050025 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sw5kq_7445317c-77cd-4b07-b3d9-17f5d07f247d/openstack-network-exporter/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.184178 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-88h7f_0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6/ovsdb-server-init/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.446618 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-88h7f_0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6/ovs-vswitchd/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.505347 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-88h7f_0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6/ovsdb-server-init/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.507537 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-88h7f_0d94c4e3-5b95-4cc6-aca8-6fd33b1d9ba6/ovsdb-server/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.675397 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_6a5af028-91b9-4bfa-a3b9-efa454ff8d31/openstack-network-exporter/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.713475 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6a5af028-91b9-4bfa-a3b9-efa454ff8d31/ovn-northd/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.812959 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8dcfa71d-54ed-4415-92cf-0dd4133a5c96/openstack-network-exporter/0.log" Jan 31 17:16:53 crc kubenswrapper[4730]: I0131 17:16:53.907892 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8dcfa71d-54ed-4415-92cf-0dd4133a5c96/ovsdbserver-nb/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.023594 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b07dc548-3987-41f8-89d8-ca3f94e1b0c1/openstack-network-exporter/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.155080 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b07dc548-3987-41f8-89d8-ca3f94e1b0c1/ovsdbserver-sb/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.380591 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b6cc64d78-7m9cj_3e510754-1362-4ae1-9934-59a43324b2bf/placement-log/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.393117 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b6cc64d78-7m9cj_3e510754-1362-4ae1-9934-59a43324b2bf/placement-api/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.525165 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_696f3c30-383d-4a98-ab73-bd90571c8fac/setup-container/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.728701 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_696f3c30-383d-4a98-ab73-bd90571c8fac/setup-container/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.751632 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_696f3c30-383d-4a98-ab73-bd90571c8fac/rabbitmq/0.log" Jan 31 17:16:54 crc kubenswrapper[4730]: I0131 17:16:54.796750 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda/setup-container/0.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.028412 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda/setup-container/0.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.121674 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5867f46d87-f8rf9_4c3d9aec-6a99-480d-a7f3-0703ac92db04/proxy-httpd/15.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.137491 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3c3f60e3-5c0d-4c7d-a0cc-8d8ec4872eda/rabbitmq/0.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.249660 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5867f46d87-f8rf9_4c3d9aec-6a99-480d-a7f3-0703ac92db04/proxy-httpd/15.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.354166 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-5867f46d87-f8rf9_4c3d9aec-6a99-480d-a7f3-0703ac92db04/proxy-server/10.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.426793 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5867f46d87-f8rf9_4c3d9aec-6a99-480d-a7f3-0703ac92db04/proxy-server/10.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.616536 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/account-auditor/0.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.697162 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/account-reaper/0.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.751512 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/account-replicator/10.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.866632 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/account-replicator/10.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.888962 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/account-server/0.log" Jan 31 17:16:55 crc kubenswrapper[4730]: I0131 17:16:55.947159 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/container-auditor/0.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.004147 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/container-replicator/10.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.098069 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/container-replicator/10.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.218714 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/container-server/0.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.225661 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/container-updater/7.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.347339 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/container-updater/6.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.413239 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/object-expirer/10.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.440595 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/object-auditor/0.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.461004 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/object-expirer/10.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.605299 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/object-replicator/0.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.684635 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/object-server/0.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.738705 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/object-updater/8.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.778063 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/object-updater/8.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.906072 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/rsync/0.log" Jan 31 17:16:56 crc kubenswrapper[4730]: I0131 17:16:56.927446 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3656b8f0-e1d3-4214-9c23-dd437a57f2ad/swift-recon-cron/0.log" Jan 31 17:16:58 crc kubenswrapper[4730]: I0131 17:16:58.508684 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:16:58 crc kubenswrapper[4730]: I0131 17:16:58.508709 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:16:58 crc kubenswrapper[4730]: E0131 17:16:58.508996 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:16:59 crc kubenswrapper[4730]: I0131 17:16:59.464432 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:16:59 crc kubenswrapper[4730]: I0131 17:16:59.464758 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:16:59 crc kubenswrapper[4730]: I0131 17:16:59.464912 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:16:59 crc kubenswrapper[4730]: I0131 17:16:59.464922 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:16:59 crc kubenswrapper[4730]: E0131 17:16:59.465244 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:16:59 crc kubenswrapper[4730]: I0131 17:16:59.489421 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e3fc84d7-b01c-4396-89e2-54684791a14d/memcached/0.log" Jan 31 17:17:11 crc kubenswrapper[4730]: I0131 17:17:11.464326 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:17:11 crc kubenswrapper[4730]: I0131 17:17:11.464791 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:17:11 crc kubenswrapper[4730]: E0131 17:17:11.465278 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:17:12 crc kubenswrapper[4730]: I0131 17:17:12.468815 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:17:12 crc kubenswrapper[4730]: I0131 17:17:12.469136 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:17:12 crc kubenswrapper[4730]: I0131 17:17:12.469205 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:17:12 crc kubenswrapper[4730]: I0131 17:17:12.469212 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:17:12 crc kubenswrapper[4730]: E0131 17:17:12.469500 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:17:20 crc kubenswrapper[4730]: I0131 17:17:20.378931 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-ktcvd_1ecbf8bc-da38-4cc2-8d7e-eef855555957/manager/0.log" Jan 31 17:17:20 crc kubenswrapper[4730]: I0131 17:17:20.548710 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc_9996fc15-d71e-46b8-8ad0-bebb587efa83/util/0.log" Jan 31 17:17:20 crc kubenswrapper[4730]: I0131 17:17:20.806381 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc_9996fc15-d71e-46b8-8ad0-bebb587efa83/pull/0.log" Jan 31 17:17:20 crc kubenswrapper[4730]: I0131 17:17:20.810358 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc_9996fc15-d71e-46b8-8ad0-bebb587efa83/pull/0.log" Jan 31 17:17:20 crc kubenswrapper[4730]: I0131 17:17:20.825262 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc_9996fc15-d71e-46b8-8ad0-bebb587efa83/util/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.003213 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc_9996fc15-d71e-46b8-8ad0-bebb587efa83/util/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.020620 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc_9996fc15-d71e-46b8-8ad0-bebb587efa83/pull/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.124767 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_be4a31d3922b8a8b977cd2aebe9b6dc1309654c42f2eca5af0a645f310t6wjc_9996fc15-d71e-46b8-8ad0-bebb587efa83/extract/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.283258 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-bzkp6_13990a08-64f5-47af-a6fb-59b6b547fe7f/manager/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.315621 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-v5rrb_5d112f3e-564e-4003-90fe-6472c5643d40/manager/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.509434 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-pcvgw_db806e61-96eb-4f21-9521-85c8cca3dbb6/manager/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.607146 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-hmbg9_d13ce75a-a1e5-4a49-a46a-514b904c460a/manager/0.log" Jan 31 17:17:21 crc kubenswrapper[4730]: I0131 17:17:21.914501 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-w9r8d_58bb04d3-9031-43d5-b96f-0874d7ad4f79/manager/0.log" Jan 31 17:17:22 crc kubenswrapper[4730]: I0131 17:17:22.280730 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-89f56_58a9ca1b-4bc7-4912-ae16-3210ecea5790/manager/0.log" Jan 31 17:17:22 crc kubenswrapper[4730]: I0131 17:17:22.298006 4730 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-vcgsr_b542fd94-b4bf-44af-8276-7d2e686f5bb4/manager/0.log" Jan 31 17:17:22 crc kubenswrapper[4730]: I0131 17:17:22.500795 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-dl95k_4ffdcf38-ba5f-40c9-aef8-945d0c6bfbb4/manager/0.log" Jan 31 17:17:22 crc kubenswrapper[4730]: I0131 17:17:22.510285 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-4nshr_3cd35794-6a52-452b-9e7b-d1bb4f828dc1/manager/0.log" Jan 31 17:17:22 crc kubenswrapper[4730]: I0131 17:17:22.792032 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-cwqb6_f87d7bd0-a9ff-48fc-991c-09dd2931d5bd/manager/0.log" Jan 31 17:17:22 crc kubenswrapper[4730]: I0131 17:17:22.905850 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-4x5l9_0cfec67f-86ec-4246-9eef-53634c164730/manager/0.log" Jan 31 17:17:23 crc kubenswrapper[4730]: I0131 17:17:23.073845 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-kdldq_7befb81f-95d7-4b23-a23d-2255e67528b0/manager/0.log" Jan 31 17:17:23 crc kubenswrapper[4730]: I0131 17:17:23.159578 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-87zjj_113a73b1-4239-42e9-a168-704da54b2c56/manager/0.log" Jan 31 17:17:23 crc kubenswrapper[4730]: I0131 17:17:23.249650 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4db4gdn_82fbb691-9ea3-473a-9bd7-22489bcfae0a/manager/0.log" Jan 31 17:17:23 crc kubenswrapper[4730]: I0131 17:17:23.534256 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-567cf89b5c-4tqlg_11612ac7-b5f1-4c2f-ab71-2f7a455beedf/operator/0.log" Jan 31 17:17:23 crc kubenswrapper[4730]: I0131 17:17:23.787837 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-62wxs_74db53b1-8fee-4566-8280-8d5e4358ee93/registry-server/0.log" Jan 31 17:17:23 crc kubenswrapper[4730]: I0131 17:17:23.876321 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-j58sp_ae26b53f-3174-4f96-9bc0-ea8be0ce6b72/manager/0.log" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.110946 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5c77fbfdf8-th7sg_e76dee4f-067c-436f-85c4-0c538a334973/manager/0.log" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.179461 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dk9lg_73250cb3-9b05-4102-b306-6c88d4881a23/manager/0.log" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.351046 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lj76z_37bb03aa-53be-43db-bcbc-5b0ea10eb72e/operator/0.log" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.436501 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85df8f7b7c-krdxf_b6911ed2-ca0f-4fed-b5c4-3046ac427b97/manager/0.log" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.466024 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.466054 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.466467 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:17:24 crc kubenswrapper[4730]: E0131 17:17:24.466280 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.466558 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.466635 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.466647 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:17:24 crc kubenswrapper[4730]: E0131 17:17:24.466952 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.677257 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-zz8nq_e96a04a7-bf1d-4a9d-9cc4-5b193c22f7a5/manager/0.log" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.899641 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-cd9vd_03b55837-5391-4dc0-88de-aa3b0893e733/manager/0.log" Jan 31 17:17:24 crc kubenswrapper[4730]: I0131 17:17:24.985054 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-g28f6_17116685-ca76-4a23-9b73-04cec9287254/manager/0.log" Jan 31 17:17:36 crc kubenswrapper[4730]: I0131 17:17:36.464733 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:17:36 crc kubenswrapper[4730]: I0131 17:17:36.465215 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:17:36 crc kubenswrapper[4730]: E0131 17:17:36.465484 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:17:39 crc kubenswrapper[4730]: I0131 17:17:39.464478 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:17:39 crc kubenswrapper[4730]: I0131 17:17:39.464959 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:17:39 crc kubenswrapper[4730]: I0131 17:17:39.465030 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:17:39 crc kubenswrapper[4730]: I0131 17:17:39.465037 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:17:39 crc kubenswrapper[4730]: I0131 17:17:39.950698 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a"} Jan 31 17:17:39 crc kubenswrapper[4730]: I0131 17:17:39.951020 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90"} Jan 31 17:17:40 crc kubenswrapper[4730]: E0131 17:17:40.000264 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.978732 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" exitCode=1 Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.979145 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" exitCode=1 Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.979154 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" 
containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" exitCode=1 Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.979177 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a"} Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.979206 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90"} Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.979219 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297"} Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.979237 4730 scope.go:117] "RemoveContainer" containerID="fedc5adc134766a473a8e030c945af274e3a390eb5d7ba11e25a5ce71f11af0d" Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.980314 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.980390 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.980473 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:17:40 crc kubenswrapper[4730]: I0131 17:17:40.980481 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:17:40 crc kubenswrapper[4730]: E0131 17:17:40.981566 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:17:41 crc kubenswrapper[4730]: I0131 17:17:41.053215 4730 scope.go:117] "RemoveContainer" containerID="e28679c44a4d2d75c2f8df86cf3cefded8caf0bc76a38a7a21ce6d634dc55a54" Jan 31 17:17:41 crc kubenswrapper[4730]: I0131 17:17:41.104071 4730 scope.go:117] "RemoveContainer" containerID="66c05912497344303171d60d6e5eadc22724312fcaac2a9efd197276c202a5f2" Jan 31 17:17:41 crc kubenswrapper[4730]: I0131 17:17:41.997756 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:17:41 crc kubenswrapper[4730]: I0131 17:17:41.998163 4730 scope.go:117] 
"RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:17:41 crc kubenswrapper[4730]: I0131 17:17:41.998266 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:17:41 crc kubenswrapper[4730]: I0131 17:17:41.998276 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:17:41 crc kubenswrapper[4730]: E0131 17:17:41.998732 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:17:43 crc kubenswrapper[4730]: I0131 17:17:43.555197 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:17:43 crc kubenswrapper[4730]: E0131 17:17:43.555761 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:17:43 crc kubenswrapper[4730]: E0131 17:17:43.555833 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:19:45.555814085 +0000 UTC m=+2972.361871011 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:17:45 crc kubenswrapper[4730]: I0131 17:17:45.955325 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d5xfm_3ba1ee3d-4cef-4fc3-8c31-5f544dd56244/control-plane-machine-set-operator/0.log" Jan 31 17:17:46 crc kubenswrapper[4730]: I0131 17:17:46.238928 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vk49s_1b2b6c9a-5a3c-4325-be55-3ba2718191ce/kube-rbac-proxy/0.log" Jan 31 17:17:46 crc kubenswrapper[4730]: I0131 17:17:46.272836 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vk49s_1b2b6c9a-5a3c-4325-be55-3ba2718191ce/machine-api-operator/0.log" Jan 31 17:17:48 crc kubenswrapper[4730]: I0131 17:17:48.465712 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:17:48 crc kubenswrapper[4730]: I0131 17:17:48.465938 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:17:48 crc kubenswrapper[4730]: E0131 17:17:48.466217 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.581394 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xt9pb"] Jan 31 17:17:51 crc kubenswrapper[4730]: E0131 17:17:51.582229 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36757278-3fc9-42d9-9d62-459a86336957" containerName="container-00" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.582240 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="36757278-3fc9-42d9-9d62-459a86336957" containerName="container-00" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.582443 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="36757278-3fc9-42d9-9d62-459a86336957" containerName="container-00" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.584766 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.595648 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt9pb"] Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.627911 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-catalog-content\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.627978 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-utilities\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.628003 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmh8g\" (UniqueName: \"kubernetes.io/projected/eea31e70-60d8-4414-93d5-e507013a9155-kube-api-access-qmh8g\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.729200 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-catalog-content\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.729277 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-utilities\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.729301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmh8g\" (UniqueName: \"kubernetes.io/projected/eea31e70-60d8-4414-93d5-e507013a9155-kube-api-access-qmh8g\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.729671 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-catalog-content\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.729886 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-utilities\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.765334 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qmh8g\" (UniqueName: \"kubernetes.io/projected/eea31e70-60d8-4414-93d5-e507013a9155-kube-api-access-qmh8g\") pod \"redhat-marketplace-xt9pb\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:51 crc kubenswrapper[4730]: I0131 17:17:51.909657 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:17:52 crc kubenswrapper[4730]: I0131 17:17:52.363238 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt9pb"] Jan 31 17:17:52 crc kubenswrapper[4730]: I0131 17:17:52.466633 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:17:52 crc kubenswrapper[4730]: I0131 17:17:52.466972 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:17:52 crc kubenswrapper[4730]: I0131 17:17:52.467175 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:17:52 crc kubenswrapper[4730]: I0131 17:17:52.467283 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:17:52 crc kubenswrapper[4730]: E0131 17:17:52.467763 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:17:53 crc kubenswrapper[4730]: I0131 17:17:53.088099 4730 generic.go:334] "Generic (PLEG): container finished" podID="eea31e70-60d8-4414-93d5-e507013a9155" containerID="b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456" exitCode=0 Jan 31 17:17:53 crc kubenswrapper[4730]: I0131 17:17:53.088397 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt9pb" event={"ID":"eea31e70-60d8-4414-93d5-e507013a9155","Type":"ContainerDied","Data":"b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456"} Jan 31 17:17:53 crc kubenswrapper[4730]: I0131 17:17:53.088447 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt9pb" event={"ID":"eea31e70-60d8-4414-93d5-e507013a9155","Type":"ContainerStarted","Data":"5a017b8a25bf3bd46d661ff380404f0212bcc947d01384f73d6f40dbf9f0cc31"} Jan 31 17:17:54 crc kubenswrapper[4730]: I0131 17:17:54.098153 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt9pb" 
event={"ID":"eea31e70-60d8-4414-93d5-e507013a9155","Type":"ContainerStarted","Data":"5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2"} Jan 31 17:17:55 crc kubenswrapper[4730]: I0131 17:17:55.106564 4730 generic.go:334] "Generic (PLEG): container finished" podID="eea31e70-60d8-4414-93d5-e507013a9155" containerID="5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2" exitCode=0 Jan 31 17:17:55 crc kubenswrapper[4730]: I0131 17:17:55.106659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt9pb" event={"ID":"eea31e70-60d8-4414-93d5-e507013a9155","Type":"ContainerDied","Data":"5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2"} Jan 31 17:17:56 crc kubenswrapper[4730]: I0131 17:17:56.116957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt9pb" event={"ID":"eea31e70-60d8-4414-93d5-e507013a9155","Type":"ContainerStarted","Data":"33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948"} Jan 31 17:17:56 crc kubenswrapper[4730]: I0131 17:17:56.152665 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xt9pb" podStartSLOduration=2.75806366 podStartE2EDuration="5.152645566s" podCreationTimestamp="2026-01-31 17:17:51 +0000 UTC" firstStartedPulling="2026-01-31 17:17:53.090379624 +0000 UTC m=+2859.896436560" lastFinishedPulling="2026-01-31 17:17:55.48496154 +0000 UTC m=+2862.291018466" observedRunningTime="2026-01-31 17:17:56.144521928 +0000 UTC m=+2862.950578864" watchObservedRunningTime="2026-01-31 17:17:56.152645566 +0000 UTC m=+2862.958702482" Jan 31 17:17:56 crc kubenswrapper[4730]: I0131 17:17:56.975154 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:17:56 crc kubenswrapper[4730]: I0131 17:17:56.975213 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:17:59 crc kubenswrapper[4730]: E0131 17:17:59.109600 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:17:59 crc kubenswrapper[4730]: I0131 17:17:59.141304 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:17:59 crc kubenswrapper[4730]: I0131 17:17:59.618095 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h45ph_09f7f15d-b5e1-45b1-9f93-9bbd68805051/cert-manager-controller/0.log" Jan 31 17:17:59 crc kubenswrapper[4730]: I0131 17:17:59.798742 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9lhsp_fd8a2a6c-ec68-4905-a135-ee167753b731/cert-manager-cainjector/0.log" Jan 31 17:17:59 crc kubenswrapper[4730]: I0131 17:17:59.824837 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-fx65b_4208ba55-ea8a-4d6d-9618-8afcbf1216a2/cert-manager-webhook/0.log" Jan 31 17:18:01 crc kubenswrapper[4730]: I0131 17:18:01.909722 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:18:01 crc kubenswrapper[4730]: I0131 17:18:01.910092 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:18:01 crc kubenswrapper[4730]: I0131 17:18:01.956080 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:18:02 crc kubenswrapper[4730]: I0131 17:18:02.214349 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:18:02 crc kubenswrapper[4730]: I0131 17:18:02.261879 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt9pb"] Jan 31 17:18:03 crc kubenswrapper[4730]: I0131 17:18:03.464846 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:18:03 crc kubenswrapper[4730]: I0131 17:18:03.465128 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:18:03 crc kubenswrapper[4730]: E0131 17:18:03.465482 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.179007 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xt9pb" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="registry-server" containerID="cri-o://33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948" gracePeriod=2 Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.657622 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.787757 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmh8g\" (UniqueName: \"kubernetes.io/projected/eea31e70-60d8-4414-93d5-e507013a9155-kube-api-access-qmh8g\") pod \"eea31e70-60d8-4414-93d5-e507013a9155\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.788363 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-utilities\") pod \"eea31e70-60d8-4414-93d5-e507013a9155\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.788476 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-catalog-content\") pod \"eea31e70-60d8-4414-93d5-e507013a9155\" (UID: \"eea31e70-60d8-4414-93d5-e507013a9155\") " Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.789358 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-utilities" (OuterVolumeSpecName: "utilities") pod "eea31e70-60d8-4414-93d5-e507013a9155" (UID: "eea31e70-60d8-4414-93d5-e507013a9155"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.794002 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea31e70-60d8-4414-93d5-e507013a9155-kube-api-access-qmh8g" (OuterVolumeSpecName: "kube-api-access-qmh8g") pod "eea31e70-60d8-4414-93d5-e507013a9155" (UID: "eea31e70-60d8-4414-93d5-e507013a9155"). InnerVolumeSpecName "kube-api-access-qmh8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.807658 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eea31e70-60d8-4414-93d5-e507013a9155" (UID: "eea31e70-60d8-4414-93d5-e507013a9155"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.891292 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.891538 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea31e70-60d8-4414-93d5-e507013a9155-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 17:18:04 crc kubenswrapper[4730]: I0131 17:18:04.891658 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmh8g\" (UniqueName: \"kubernetes.io/projected/eea31e70-60d8-4414-93d5-e507013a9155-kube-api-access-qmh8g\") on node \"crc\" DevicePath \"\"" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.191024 4730 generic.go:334] "Generic (PLEG): container finished" podID="eea31e70-60d8-4414-93d5-e507013a9155" containerID="33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948" exitCode=0 Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.191498 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xt9pb" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.191518 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt9pb" event={"ID":"eea31e70-60d8-4414-93d5-e507013a9155","Type":"ContainerDied","Data":"33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948"} Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.193275 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xt9pb" event={"ID":"eea31e70-60d8-4414-93d5-e507013a9155","Type":"ContainerDied","Data":"5a017b8a25bf3bd46d661ff380404f0212bcc947d01384f73d6f40dbf9f0cc31"} Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.193298 4730 scope.go:117] "RemoveContainer" containerID="33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.223081 4730 scope.go:117] "RemoveContainer" containerID="5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.256315 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt9pb"] Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.266040 4730 scope.go:117] "RemoveContainer" containerID="b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.266412 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xt9pb"] Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.300157 4730 scope.go:117] "RemoveContainer" containerID="33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948" Jan 31 17:18:05 crc kubenswrapper[4730]: E0131 17:18:05.300698 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948\": container with ID starting with 33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948 not found: ID does not exist" containerID="33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.300751 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948"} err="failed to get container status \"33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948\": rpc error: code = NotFound desc = could not find container \"33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948\": container with ID starting with 33c4fe017acdaa5069f64b2d19507c6c52272d373e9723230f5b932820221948 not found: ID does not exist" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.300783 4730 scope.go:117] "RemoveContainer" containerID="5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2" Jan 31 17:18:05 crc kubenswrapper[4730]: E0131 17:18:05.301108 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2\": container with ID starting with 5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2 not found: ID does not exist" containerID="5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.301149 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2"} err="failed to get container status \"5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2\": rpc error: code = NotFound desc = could not find container \"5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2\": container with ID starting with 5e3bb97e8bf6a756b30d8e9f680b27d5a439b0a3503e5447347fc3438b6e9cf2 not found: ID does not exist" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.301170 4730 scope.go:117] "RemoveContainer" containerID="b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456" Jan 31 17:18:05 crc kubenswrapper[4730]: E0131 17:18:05.301427 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456\": container with ID starting with b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456 not found: ID does not exist" containerID="b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456" Jan 31 17:18:05 crc kubenswrapper[4730]: I0131 17:18:05.301465 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456"} err="failed to get container status \"b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456\": rpc error: code = NotFound desc = could not find container \"b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456\": container with ID starting with b8fc78cc52069f7b015e49c285bb0365df74225bda423ea15305edd35dc15456 not found: ID does not exist" Jan 31 17:18:06 crc kubenswrapper[4730]: I0131 17:18:06.466968 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:18:06 crc kubenswrapper[4730]: I0131 17:18:06.468291 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:18:06 crc kubenswrapper[4730]: I0131 17:18:06.468595 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:18:06 
crc kubenswrapper[4730]: I0131 17:18:06.468729 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:18:06 crc kubenswrapper[4730]: E0131 17:18:06.470027 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:18:06 crc kubenswrapper[4730]: I0131 17:18:06.481903 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea31e70-60d8-4414-93d5-e507013a9155" path="/var/lib/kubelet/pods/eea31e70-60d8-4414-93d5-e507013a9155/volumes" Jan 31 17:18:14 crc kubenswrapper[4730]: I0131 17:18:14.876032 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-p6fl8_485682da-cdf9-4bb1-ad07-06ed4ac7ff92/nmstate-console-plugin/0.log" Jan 31 17:18:15 crc kubenswrapper[4730]: I0131 17:18:15.018070 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fjff6_2126b9cb-bf66-467f-8f34-400ea7d780ee/nmstate-handler/0.log" Jan 31 17:18:15 crc kubenswrapper[4730]: I0131 17:18:15.088638 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-sk79b_1e9f7b4c-83b7-465f-b684-8131c5e63277/kube-rbac-proxy/0.log" Jan 31 17:18:15 crc kubenswrapper[4730]: I0131 17:18:15.203668 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-sk79b_1e9f7b4c-83b7-465f-b684-8131c5e63277/nmstate-metrics/0.log" Jan 31 17:18:15 crc kubenswrapper[4730]: I0131 17:18:15.286680 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-4zz5r_7545d2e0-52ef-41a7-a0be-3c97df2f4fd8/nmstate-operator/0.log" Jan 31 17:18:15 crc kubenswrapper[4730]: I0131 17:18:15.376376 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lh2fv_487cafab-d04e-41a9-8f02-fde62acc89d9/nmstate-webhook/0.log" Jan 31 17:18:16 crc kubenswrapper[4730]: I0131 17:18:16.466592 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:18:16 crc kubenswrapper[4730]: I0131 17:18:16.466625 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:18:16 crc kubenswrapper[4730]: E0131 17:18:16.466863 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:18:19 crc kubenswrapper[4730]: I0131 17:18:19.464934 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:18:19 crc kubenswrapper[4730]: I0131 17:18:19.465448 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:18:19 crc kubenswrapper[4730]: I0131 17:18:19.465523 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:18:19 crc kubenswrapper[4730]: I0131 17:18:19.465529 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:18:19 crc kubenswrapper[4730]: E0131 17:18:19.465810 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:18:26 crc kubenswrapper[4730]: I0131 17:18:26.974760 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:18:26 crc kubenswrapper[4730]: I0131 17:18:26.976218 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:18:27 crc kubenswrapper[4730]: I0131 17:18:27.464761 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:18:27 crc kubenswrapper[4730]: I0131 17:18:27.464839 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:18:27 crc kubenswrapper[4730]: E0131 17:18:27.465291 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:18:30 crc kubenswrapper[4730]: I0131 17:18:30.464845 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:18:30 crc kubenswrapper[4730]: I0131 17:18:30.465112 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:18:30 crc kubenswrapper[4730]: I0131 17:18:30.465184 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:18:30 crc kubenswrapper[4730]: I0131 17:18:30.465192 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:18:30 crc kubenswrapper[4730]: E0131 17:18:30.465470 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:18:39 crc kubenswrapper[4730]: I0131 17:18:39.464722 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:18:39 crc kubenswrapper[4730]: I0131 17:18:39.465489 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:18:39 crc kubenswrapper[4730]: E0131 17:18:39.465751 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.758511 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vczf7"] Jan 31 17:18:43 crc kubenswrapper[4730]: E0131 17:18:43.759228 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="extract-utilities" Jan 31 17:18:43 crc 
kubenswrapper[4730]: I0131 17:18:43.759240 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="extract-utilities" Jan 31 17:18:43 crc kubenswrapper[4730]: E0131 17:18:43.759257 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="extract-content" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.759263 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="extract-content" Jan 31 17:18:43 crc kubenswrapper[4730]: E0131 17:18:43.759288 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="registry-server" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.759294 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="registry-server" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.759502 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea31e70-60d8-4414-93d5-e507013a9155" containerName="registry-server" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.760698 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.786164 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vczf7"] Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.910292 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-utilities\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.910640 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6sm\" (UniqueName: \"kubernetes.io/projected/f32f08aa-0df5-4400-8e4b-4d8e2346f792-kube-api-access-4s6sm\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:43 crc kubenswrapper[4730]: I0131 17:18:43.910662 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-catalog-content\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.012180 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-utilities\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.012240 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6sm\" (UniqueName: \"kubernetes.io/projected/f32f08aa-0df5-4400-8e4b-4d8e2346f792-kube-api-access-4s6sm\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 
17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.012259 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-catalog-content\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.013008 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-catalog-content\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.013021 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-utilities\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.055957 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6sm\" (UniqueName: \"kubernetes.io/projected/f32f08aa-0df5-4400-8e4b-4d8e2346f792-kube-api-access-4s6sm\") pod \"redhat-operators-vczf7\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.077350 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.466765 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.467134 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.467213 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.467220 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:18:44 crc kubenswrapper[4730]: E0131 17:18:44.467573 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:18:44 crc kubenswrapper[4730]: I0131 17:18:44.733539 4730 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vczf7"] Jan 31 17:18:45 crc kubenswrapper[4730]: I0131 17:18:45.543675 4730 generic.go:334] "Generic (PLEG): container finished" podID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerID="cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca" exitCode=0 Jan 31 17:18:45 crc kubenswrapper[4730]: I0131 17:18:45.543771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vczf7" event={"ID":"f32f08aa-0df5-4400-8e4b-4d8e2346f792","Type":"ContainerDied","Data":"cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca"} Jan 31 17:18:45 crc kubenswrapper[4730]: I0131 17:18:45.543954 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vczf7" event={"ID":"f32f08aa-0df5-4400-8e4b-4d8e2346f792","Type":"ContainerStarted","Data":"97acb5d6ee036d4ce00103589070a2867cad6d3e57d7d2d81698dcf5332f69db"} Jan 31 17:18:46 crc kubenswrapper[4730]: I0131 17:18:46.553563 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vczf7" event={"ID":"f32f08aa-0df5-4400-8e4b-4d8e2346f792","Type":"ContainerStarted","Data":"fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d"} Jan 31 17:18:46 crc kubenswrapper[4730]: I0131 17:18:46.736669 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zv6nq_08181da5-8c97-4a4a-bfaf-f0f300cacf5b/kube-rbac-proxy/0.log" Jan 31 17:18:46 crc kubenswrapper[4730]: I0131 17:18:46.890669 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-zv6nq_08181da5-8c97-4a4a-bfaf-f0f300cacf5b/controller/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.126151 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-frr-files/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.350879 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-reloader/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.381259 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-reloader/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.397305 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-metrics/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.430683 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-frr-files/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.625478 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-frr-files/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.667995 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-metrics/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.719385 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-metrics/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.751538 4730 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-reloader/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.905665 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-frr-files/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.931036 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-metrics/0.log" Jan 31 17:18:47 crc kubenswrapper[4730]: I0131 17:18:47.986901 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/cp-reloader/0.log" Jan 31 17:18:48 crc kubenswrapper[4730]: I0131 17:18:48.013106 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/controller/0.log" Jan 31 17:18:48 crc kubenswrapper[4730]: I0131 17:18:48.149081 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/frr-metrics/0.log" Jan 31 17:18:48 crc kubenswrapper[4730]: I0131 17:18:48.276148 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/kube-rbac-proxy/0.log" Jan 31 17:18:48 crc kubenswrapper[4730]: I0131 17:18:48.309102 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/kube-rbac-proxy-frr/0.log" Jan 31 17:18:48 crc kubenswrapper[4730]: I0131 17:18:48.960349 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/reloader/0.log" Jan 31 17:18:49 crc kubenswrapper[4730]: I0131 17:18:49.035200 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8lbph_129f61a1-e50c-4f81-a931-d9924c771c4f/frr-k8s-webhook-server/0.log" Jan 31 17:18:49 crc kubenswrapper[4730]: I0131 17:18:49.156672 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b2bpp_6b47a859-3bb1-4179-9cc2-8274173a22d4/frr/0.log" Jan 31 17:18:49 crc kubenswrapper[4730]: I0131 17:18:49.360692 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-56c885bfd6-vqnrh_59226704-24cc-4677-bb59-408503c70795/manager/0.log" Jan 31 17:18:49 crc kubenswrapper[4730]: I0131 17:18:49.462505 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-545856c6bc-fnppt_430bc339-5bd3-4873-94e9-229d6861a1ba/webhook-server/0.log" Jan 31 17:18:49 crc kubenswrapper[4730]: I0131 17:18:49.624093 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xxzrr_da3276e9-6b00-45e8-8db5-6bfc6f7f276f/kube-rbac-proxy/0.log" Jan 31 17:18:49 crc kubenswrapper[4730]: I0131 17:18:49.925693 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xxzrr_da3276e9-6b00-45e8-8db5-6bfc6f7f276f/speaker/0.log" Jan 31 17:18:52 crc kubenswrapper[4730]: I0131 17:18:52.464582 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:18:52 crc kubenswrapper[4730]: I0131 17:18:52.464864 4730 scope.go:117] "RemoveContainer" 
containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:18:52 crc kubenswrapper[4730]: E0131 17:18:52.465129 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:18:52 crc kubenswrapper[4730]: I0131 17:18:52.601264 4730 generic.go:334] "Generic (PLEG): container finished" podID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerID="fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d" exitCode=0 Jan 31 17:18:52 crc kubenswrapper[4730]: I0131 17:18:52.601308 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vczf7" event={"ID":"f32f08aa-0df5-4400-8e4b-4d8e2346f792","Type":"ContainerDied","Data":"fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d"} Jan 31 17:18:53 crc kubenswrapper[4730]: I0131 17:18:53.609949 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vczf7" event={"ID":"f32f08aa-0df5-4400-8e4b-4d8e2346f792","Type":"ContainerStarted","Data":"55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6"} Jan 31 17:18:53 crc kubenswrapper[4730]: I0131 17:18:53.627953 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vczf7" podStartSLOduration=3.169078343 podStartE2EDuration="10.627937029s" podCreationTimestamp="2026-01-31 17:18:43 +0000 UTC" firstStartedPulling="2026-01-31 17:18:45.545082743 +0000 UTC m=+2912.351139659" lastFinishedPulling="2026-01-31 17:18:53.003941429 +0000 UTC m=+2919.809998345" observedRunningTime="2026-01-31 17:18:53.624726409 +0000 UTC m=+2920.430783325" watchObservedRunningTime="2026-01-31 17:18:53.627937029 +0000 UTC m=+2920.433993945" Jan 31 17:18:54 crc kubenswrapper[4730]: I0131 17:18:54.077586 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:54 crc kubenswrapper[4730]: I0131 17:18:54.077815 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:18:55 crc kubenswrapper[4730]: I0131 17:18:55.123418 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vczf7" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="registry-server" probeResult="failure" output=< Jan 31 17:18:55 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Jan 31 17:18:55 crc kubenswrapper[4730]: > Jan 31 17:18:56 crc kubenswrapper[4730]: I0131 17:18:56.975097 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:18:56 crc kubenswrapper[4730]: I0131 17:18:56.975164 4730 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:18:56 crc kubenswrapper[4730]: I0131 17:18:56.975241 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 17:18:56 crc kubenswrapper[4730]: I0131 17:18:56.975963 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f0b779e1030f9cbd3ff463a2fefa2b4f4a055fd00a384af88e6f8249382c9c3"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 17:18:56 crc kubenswrapper[4730]: I0131 17:18:56.976030 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://8f0b779e1030f9cbd3ff463a2fefa2b4f4a055fd00a384af88e6f8249382c9c3" gracePeriod=600 Jan 31 17:18:57 crc kubenswrapper[4730]: I0131 17:18:57.638146 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="8f0b779e1030f9cbd3ff463a2fefa2b4f4a055fd00a384af88e6f8249382c9c3" exitCode=0 Jan 31 17:18:57 crc kubenswrapper[4730]: I0131 17:18:57.638209 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"8f0b779e1030f9cbd3ff463a2fefa2b4f4a055fd00a384af88e6f8249382c9c3"} Jan 31 17:18:57 crc kubenswrapper[4730]: I0131 17:18:57.638490 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerStarted","Data":"76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8"} Jan 31 17:18:57 crc kubenswrapper[4730]: I0131 17:18:57.638523 4730 scope.go:117] "RemoveContainer" containerID="f87dce6c6c91f4fcb19a3f0b956e4dd8da44d6fcf718bf2d1a80476ab9159edf" Jan 31 17:18:58 crc kubenswrapper[4730]: I0131 17:18:58.464879 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:18:58 crc kubenswrapper[4730]: I0131 17:18:58.465211 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:18:58 crc kubenswrapper[4730]: I0131 17:18:58.465298 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:18:58 crc kubenswrapper[4730]: I0131 17:18:58.465306 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:18:58 crc kubenswrapper[4730]: E0131 17:18:58.465647 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:19:03 crc kubenswrapper[4730]: I0131 17:19:03.993341 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h_702064e1-dbb1-4b48-a075-2dc133933618/util/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.126387 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.143329 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h_702064e1-dbb1-4b48-a075-2dc133933618/util/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.177824 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.219779 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h_702064e1-dbb1-4b48-a075-2dc133933618/pull/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.239484 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h_702064e1-dbb1-4b48-a075-2dc133933618/pull/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.363638 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vczf7"] Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.415374 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h_702064e1-dbb1-4b48-a075-2dc133933618/pull/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.488846 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h_702064e1-dbb1-4b48-a075-2dc133933618/util/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.516764 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ds8h_702064e1-dbb1-4b48-a075-2dc133933618/extract/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.655897 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt_92c0884c-e6df-47ef-9f9b-5b185db8ea98/util/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.874198 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt_92c0884c-e6df-47ef-9f9b-5b185db8ea98/pull/0.log" Jan 31 17:19:04 crc 
kubenswrapper[4730]: I0131 17:19:04.910431 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt_92c0884c-e6df-47ef-9f9b-5b185db8ea98/util/0.log" Jan 31 17:19:04 crc kubenswrapper[4730]: I0131 17:19:04.911570 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt_92c0884c-e6df-47ef-9f9b-5b185db8ea98/pull/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.099154 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt_92c0884c-e6df-47ef-9f9b-5b185db8ea98/pull/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.113382 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt_92c0884c-e6df-47ef-9f9b-5b185db8ea98/util/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.138057 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jcsjt_92c0884c-e6df-47ef-9f9b-5b185db8ea98/extract/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.326011 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wkn2d_2d741dd8-c85c-4a72-af3f-684820db766f/extract-utilities/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.459038 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wkn2d_2d741dd8-c85c-4a72-af3f-684820db766f/extract-utilities/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.473480 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wkn2d_2d741dd8-c85c-4a72-af3f-684820db766f/extract-content/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.487144 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wkn2d_2d741dd8-c85c-4a72-af3f-684820db766f/extract-content/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.666590 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wkn2d_2d741dd8-c85c-4a72-af3f-684820db766f/extract-utilities/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.690367 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wkn2d_2d741dd8-c85c-4a72-af3f-684820db766f/extract-content/0.log" Jan 31 17:19:05 crc kubenswrapper[4730]: I0131 17:19:05.710134 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vczf7" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="registry-server" containerID="cri-o://55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6" gracePeriod=2 Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.050116 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wkn2d_2d741dd8-c85c-4a72-af3f-684820db766f/registry-server/0.log" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.099216 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xrm4k_5e150fad-06a0-4be0-a63d-5ca05ea1b1e5/extract-utilities/0.log" Jan 
31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.225727 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.299165 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xrm4k_5e150fad-06a0-4be0-a63d-5ca05ea1b1e5/extract-content/0.log" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.307121 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-catalog-content\") pod \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.307302 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-utilities\") pod \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.307793 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-utilities" (OuterVolumeSpecName: "utilities") pod "f32f08aa-0df5-4400-8e4b-4d8e2346f792" (UID: "f32f08aa-0df5-4400-8e4b-4d8e2346f792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.307903 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s6sm\" (UniqueName: \"kubernetes.io/projected/f32f08aa-0df5-4400-8e4b-4d8e2346f792-kube-api-access-4s6sm\") pod \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\" (UID: \"f32f08aa-0df5-4400-8e4b-4d8e2346f792\") " Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.308986 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.313973 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32f08aa-0df5-4400-8e4b-4d8e2346f792-kube-api-access-4s6sm" (OuterVolumeSpecName: "kube-api-access-4s6sm") pod "f32f08aa-0df5-4400-8e4b-4d8e2346f792" (UID: "f32f08aa-0df5-4400-8e4b-4d8e2346f792"). InnerVolumeSpecName "kube-api-access-4s6sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.332971 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xrm4k_5e150fad-06a0-4be0-a63d-5ca05ea1b1e5/extract-utilities/0.log" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.375505 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xrm4k_5e150fad-06a0-4be0-a63d-5ca05ea1b1e5/extract-content/0.log" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.410927 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s6sm\" (UniqueName: \"kubernetes.io/projected/f32f08aa-0df5-4400-8e4b-4d8e2346f792-kube-api-access-4s6sm\") on node \"crc\" DevicePath \"\"" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.433594 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f32f08aa-0df5-4400-8e4b-4d8e2346f792" (UID: "f32f08aa-0df5-4400-8e4b-4d8e2346f792"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.464214 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.464345 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:19:06 crc kubenswrapper[4730]: E0131 17:19:06.464790 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.512550 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32f08aa-0df5-4400-8e4b-4d8e2346f792-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.569823 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xrm4k_5e150fad-06a0-4be0-a63d-5ca05ea1b1e5/extract-content/0.log" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.582551 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xrm4k_5e150fad-06a0-4be0-a63d-5ca05ea1b1e5/extract-utilities/0.log" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.723815 4730 generic.go:334] "Generic (PLEG): container finished" podID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerID="55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6" exitCode=0 Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.723856 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vczf7" 
event={"ID":"f32f08aa-0df5-4400-8e4b-4d8e2346f792","Type":"ContainerDied","Data":"55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6"} Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.723882 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vczf7" event={"ID":"f32f08aa-0df5-4400-8e4b-4d8e2346f792","Type":"ContainerDied","Data":"97acb5d6ee036d4ce00103589070a2867cad6d3e57d7d2d81698dcf5332f69db"} Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.723897 4730 scope.go:117] "RemoveContainer" containerID="55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.724013 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vczf7" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.764218 4730 scope.go:117] "RemoveContainer" containerID="fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.768884 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vczf7"] Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.778869 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vczf7"] Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.804533 4730 scope.go:117] "RemoveContainer" containerID="cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.859106 4730 scope.go:117] "RemoveContainer" containerID="55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6" Jan 31 17:19:06 crc kubenswrapper[4730]: E0131 17:19:06.859541 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6\": container with ID starting with 55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6 not found: ID does not exist" containerID="55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.859577 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6"} err="failed to get container status \"55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6\": rpc error: code = NotFound desc = could not find container \"55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6\": container with ID starting with 55ce49fe370625b77cc928deaad3831f98a191414ee861f83bd748c4a7a7ded6 not found: ID does not exist" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.859635 4730 scope.go:117] "RemoveContainer" containerID="fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d" Jan 31 17:19:06 crc kubenswrapper[4730]: E0131 17:19:06.860059 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d\": container with ID starting with fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d not found: ID does not exist" containerID="fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.860108 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d"} err="failed to get container status \"fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d\": rpc error: code = NotFound desc = could not find container \"fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d\": container with ID starting with fa0c723aa1ba0ea7020bf5aadbb9db1ef8631bc94156e7ef84840074d8ea778d not found: ID does not exist" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.860136 4730 scope.go:117] "RemoveContainer" containerID="cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca" Jan 31 17:19:06 crc kubenswrapper[4730]: E0131 17:19:06.860565 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca\": container with ID starting with cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca not found: ID does not exist" containerID="cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.860592 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca"} err="failed to get container status \"cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca\": rpc error: code = NotFound desc = could not find container \"cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca\": container with ID starting with cbbd23e971ae9475eaa4d3db4c73c39319c05c1a1923a7144f8df4f9583e63ca not found: ID does not exist" Jan 31 17:19:06 crc kubenswrapper[4730]: I0131 17:19:06.909631 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7c7m8_a8f25085-b681-4c8d-a35e-363253891c50/marketplace-operator/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.102223 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xrm4k_5e150fad-06a0-4be0-a63d-5ca05ea1b1e5/registry-server/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.136489 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mjjwq_5e7da571-bfe1-4d2b-b903-1ad7e91743fa/extract-utilities/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.303836 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mjjwq_5e7da571-bfe1-4d2b-b903-1ad7e91743fa/extract-utilities/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.331773 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mjjwq_5e7da571-bfe1-4d2b-b903-1ad7e91743fa/extract-content/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.372597 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mjjwq_5e7da571-bfe1-4d2b-b903-1ad7e91743fa/extract-content/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.567888 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mjjwq_5e7da571-bfe1-4d2b-b903-1ad7e91743fa/extract-content/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.584109 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mjjwq_5e7da571-bfe1-4d2b-b903-1ad7e91743fa/extract-utilities/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.658022 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mjjwq_5e7da571-bfe1-4d2b-b903-1ad7e91743fa/registry-server/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.770785 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-shd46_d14e024e-91a6-4a1d-be75-7b2588eea935/extract-utilities/0.log" Jan 31 17:19:07 crc kubenswrapper[4730]: I0131 17:19:07.970759 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-shd46_d14e024e-91a6-4a1d-be75-7b2588eea935/extract-content/0.log" Jan 31 17:19:08 crc kubenswrapper[4730]: I0131 17:19:08.003203 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-shd46_d14e024e-91a6-4a1d-be75-7b2588eea935/extract-utilities/0.log" Jan 31 17:19:08 crc kubenswrapper[4730]: I0131 17:19:08.016395 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-shd46_d14e024e-91a6-4a1d-be75-7b2588eea935/extract-content/0.log" Jan 31 17:19:08 crc kubenswrapper[4730]: I0131 17:19:08.141228 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-shd46_d14e024e-91a6-4a1d-be75-7b2588eea935/extract-utilities/0.log" Jan 31 17:19:08 crc kubenswrapper[4730]: I0131 17:19:08.304442 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-shd46_d14e024e-91a6-4a1d-be75-7b2588eea935/extract-content/0.log" Jan 31 17:19:08 crc kubenswrapper[4730]: I0131 17:19:08.478022 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" path="/var/lib/kubelet/pods/f32f08aa-0df5-4400-8e4b-4d8e2346f792/volumes" Jan 31 17:19:08 crc kubenswrapper[4730]: I0131 17:19:08.519433 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-shd46_d14e024e-91a6-4a1d-be75-7b2588eea935/registry-server/0.log" Jan 31 17:19:12 crc kubenswrapper[4730]: I0131 17:19:12.465176 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:19:12 crc kubenswrapper[4730]: I0131 17:19:12.466967 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:19:12 crc kubenswrapper[4730]: I0131 17:19:12.467179 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:19:12 crc kubenswrapper[4730]: I0131 17:19:12.467277 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:19:12 crc kubenswrapper[4730]: E0131 17:19:12.467835 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for 
\"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.800567 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" exitCode=1 Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.800774 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95"} Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.800954 4730 scope.go:117] "RemoveContainer" containerID="6c4c9dccdf26d909459ee9f26637cb9a536819577c2198b9868d71911780d752" Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.802086 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.802218 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.802268 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.802385 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:19:14 crc kubenswrapper[4730]: I0131 17:19:14.802404 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:19:14 crc kubenswrapper[4730]: E0131 17:19:14.803086 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:19:20 crc kubenswrapper[4730]: I0131 17:19:20.466827 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:20 crc 
kubenswrapper[4730]: I0131 17:19:20.467359 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:19:20 crc kubenswrapper[4730]: E0131 17:19:20.643693 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:20 crc kubenswrapper[4730]: I0131 17:19:20.853524 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0"} Jan 31 17:19:20 crc kubenswrapper[4730]: I0131 17:19:20.854037 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:19:20 crc kubenswrapper[4730]: I0131 17:19:20.854479 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:20 crc kubenswrapper[4730]: E0131 17:19:20.854773 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:21 crc kubenswrapper[4730]: I0131 17:19:21.860842 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:21 crc kubenswrapper[4730]: E0131 17:19:21.861148 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:22 crc kubenswrapper[4730]: I0131 17:19:22.869941 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" exitCode=1 Jan 31 17:19:22 crc kubenswrapper[4730]: I0131 17:19:22.870005 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0"} Jan 31 17:19:22 crc kubenswrapper[4730]: I0131 17:19:22.870289 4730 scope.go:117] "RemoveContainer" containerID="3ae1e981fa5c79eb8c2ec973d31b05b2fedc6f414a0791c43837609737e681fe" Jan 31 17:19:22 crc kubenswrapper[4730]: I0131 17:19:22.870995 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:22 crc kubenswrapper[4730]: I0131 17:19:22.871011 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:19:22 crc kubenswrapper[4730]: E0131 17:19:22.871249 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:24 crc kubenswrapper[4730]: I0131 17:19:24.653156 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:19:24 crc kubenswrapper[4730]: I0131 17:19:24.653724 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:24 crc kubenswrapper[4730]: I0131 17:19:24.653737 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:19:24 crc kubenswrapper[4730]: E0131 17:19:24.653972 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:25 crc kubenswrapper[4730]: I0131 17:19:25.464794 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:19:25 crc kubenswrapper[4730]: I0131 17:19:25.464922 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:19:25 crc kubenswrapper[4730]: I0131 17:19:25.464944 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:19:25 crc kubenswrapper[4730]: I0131 17:19:25.464992 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:19:25 crc kubenswrapper[4730]: I0131 17:19:25.464999 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:19:25 crc kubenswrapper[4730]: E0131 17:19:25.465343 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to 
\"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:19:36 crc kubenswrapper[4730]: I0131 17:19:36.465023 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:36 crc kubenswrapper[4730]: I0131 17:19:36.465477 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:19:36 crc kubenswrapper[4730]: E0131 17:19:36.465733 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:38 crc kubenswrapper[4730]: I0131 17:19:38.464754 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:19:38 crc kubenswrapper[4730]: I0131 17:19:38.465123 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:19:38 crc kubenswrapper[4730]: I0131 17:19:38.465151 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:19:38 crc kubenswrapper[4730]: I0131 17:19:38.465200 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:19:38 crc kubenswrapper[4730]: I0131 17:19:38.465206 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:19:38 crc kubenswrapper[4730]: E0131 17:19:38.465606 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:19:45 crc kubenswrapper[4730]: I0131 17:19:45.614506 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:19:45 crc kubenswrapper[4730]: E0131 17:19:45.614658 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:19:45 crc kubenswrapper[4730]: E0131 17:19:45.615089 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:21:47.615072583 +0000 UTC m=+3094.421129499 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:19:50 crc kubenswrapper[4730]: I0131 17:19:50.464950 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:19:50 crc kubenswrapper[4730]: I0131 17:19:50.465540 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:19:50 crc kubenswrapper[4730]: E0131 17:19:50.466001 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:19:53 crc kubenswrapper[4730]: I0131 17:19:53.465118 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:19:53 crc kubenswrapper[4730]: I0131 17:19:53.465823 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:19:53 crc kubenswrapper[4730]: I0131 17:19:53.465858 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:19:53 crc kubenswrapper[4730]: I0131 17:19:53.465922 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:19:53 crc kubenswrapper[4730]: I0131 17:19:53.465930 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:19:53 crc kubenswrapper[4730]: E0131 17:19:53.466376 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:20:01 crc kubenswrapper[4730]: I0131 17:20:01.465146 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:20:01 crc kubenswrapper[4730]: I0131 17:20:01.465791 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:20:01 crc kubenswrapper[4730]: E0131 17:20:01.466075 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:20:02 crc kubenswrapper[4730]: E0131 17:20:02.143447 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:20:02 crc kubenswrapper[4730]: I0131 17:20:02.176652 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:20:06 crc kubenswrapper[4730]: I0131 17:20:06.467056 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:20:06 crc kubenswrapper[4730]: I0131 17:20:06.467788 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:20:06 crc kubenswrapper[4730]: I0131 17:20:06.467886 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:20:06 crc kubenswrapper[4730]: I0131 17:20:06.468043 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:20:06 crc kubenswrapper[4730]: I0131 17:20:06.468057 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:20:06 crc kubenswrapper[4730]: E0131 17:20:06.469119 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:20:15 crc kubenswrapper[4730]: I0131 17:20:15.467252 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:20:15 crc kubenswrapper[4730]: I0131 17:20:15.468860 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:20:15 crc kubenswrapper[4730]: E0131 17:20:15.469769 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:20:20 crc kubenswrapper[4730]: I0131 17:20:20.465251 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:20:20 crc kubenswrapper[4730]: I0131 17:20:20.465956 4730 scope.go:117] "RemoveContainer" 
containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:20:20 crc kubenswrapper[4730]: I0131 17:20:20.466002 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:20:20 crc kubenswrapper[4730]: I0131 17:20:20.466097 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:20:20 crc kubenswrapper[4730]: I0131 17:20:20.466110 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:20:20 crc kubenswrapper[4730]: E0131 17:20:20.466784 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:20:30 crc kubenswrapper[4730]: I0131 17:20:30.465281 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:20:30 crc kubenswrapper[4730]: I0131 17:20:30.468065 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:20:30 crc kubenswrapper[4730]: E0131 17:20:30.469069 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:20:32 crc kubenswrapper[4730]: I0131 17:20:32.465972 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:20:32 crc kubenswrapper[4730]: I0131 17:20:32.466108 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:20:32 crc kubenswrapper[4730]: I0131 17:20:32.466157 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:20:32 crc kubenswrapper[4730]: I0131 17:20:32.466260 4730 scope.go:117] "RemoveContainer" 
containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:20:32 crc kubenswrapper[4730]: I0131 17:20:32.466274 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:20:32 crc kubenswrapper[4730]: E0131 17:20:32.467039 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.465765 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.466619 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.466692 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.466871 4730 scope.go:117] "RemoveContainer" containerID="ff1a084abdecca30e20d061c24df7297fd11c297e2b3bf63a6481e639349f457" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.466893 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.467200 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.467231 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:20:44 crc kubenswrapper[4730]: E0131 17:20:44.467522 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:20:44 crc kubenswrapper[4730]: E0131 17:20:44.729771 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.923621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"06f89904e1a6b7d765c2401e50055bd19f770ebfd54879a31090a2322df88877"} Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.924557 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.924651 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.924684 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:20:44 crc kubenswrapper[4730]: I0131 17:20:44.924771 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:20:44 crc kubenswrapper[4730]: E0131 17:20:44.925231 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:20:45 crc kubenswrapper[4730]: I0131 17:20:45.933467 4730 generic.go:334] "Generic (PLEG): container finished" podID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerID="c6f969ee869575d0a7ec8770d6682e7f0ceef84fe2d202282918812c3d3435f0" exitCode=0 Jan 31 17:20:45 crc kubenswrapper[4730]: I0131 17:20:45.933577 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" 
event={"ID":"56d88f94-8bbf-4f46-883d-7d370f7b7e33","Type":"ContainerDied","Data":"c6f969ee869575d0a7ec8770d6682e7f0ceef84fe2d202282918812c3d3435f0"} Jan 31 17:20:45 crc kubenswrapper[4730]: I0131 17:20:45.934477 4730 scope.go:117] "RemoveContainer" containerID="c6f969ee869575d0a7ec8770d6682e7f0ceef84fe2d202282918812c3d3435f0" Jan 31 17:20:46 crc kubenswrapper[4730]: I0131 17:20:46.320447 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rhb8m_must-gather-wdrtz_56d88f94-8bbf-4f46-883d-7d370f7b7e33/gather/0.log" Jan 31 17:20:53 crc kubenswrapper[4730]: I0131 17:20:53.838437 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rhb8m/must-gather-wdrtz"] Jan 31 17:20:53 crc kubenswrapper[4730]: I0131 17:20:53.839403 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerName="copy" containerID="cri-o://ee6da4e03bfaaf100a360969f6dcb54cbff7e71c20916a9dabf9d6a159b39a50" gracePeriod=2 Jan 31 17:20:53 crc kubenswrapper[4730]: I0131 17:20:53.852084 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rhb8m/must-gather-wdrtz"] Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.008439 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rhb8m_must-gather-wdrtz_56d88f94-8bbf-4f46-883d-7d370f7b7e33/copy/0.log" Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.008771 4730 generic.go:334] "Generic (PLEG): container finished" podID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerID="ee6da4e03bfaaf100a360969f6dcb54cbff7e71c20916a9dabf9d6a159b39a50" exitCode=143 Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.273482 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rhb8m_must-gather-wdrtz_56d88f94-8bbf-4f46-883d-7d370f7b7e33/copy/0.log" Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.274109 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.290861 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xnm7\" (UniqueName: \"kubernetes.io/projected/56d88f94-8bbf-4f46-883d-7d370f7b7e33-kube-api-access-4xnm7\") pod \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.291019 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d88f94-8bbf-4f46-883d-7d370f7b7e33-must-gather-output\") pod \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\" (UID: \"56d88f94-8bbf-4f46-883d-7d370f7b7e33\") " Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.296468 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d88f94-8bbf-4f46-883d-7d370f7b7e33-kube-api-access-4xnm7" (OuterVolumeSpecName: "kube-api-access-4xnm7") pod "56d88f94-8bbf-4f46-883d-7d370f7b7e33" (UID: "56d88f94-8bbf-4f46-883d-7d370f7b7e33"). InnerVolumeSpecName "kube-api-access-4xnm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.393593 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xnm7\" (UniqueName: \"kubernetes.io/projected/56d88f94-8bbf-4f46-883d-7d370f7b7e33-kube-api-access-4xnm7\") on node \"crc\" DevicePath \"\"" Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.430277 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d88f94-8bbf-4f46-883d-7d370f7b7e33-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "56d88f94-8bbf-4f46-883d-7d370f7b7e33" (UID: "56d88f94-8bbf-4f46-883d-7d370f7b7e33"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.474622 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" path="/var/lib/kubelet/pods/56d88f94-8bbf-4f46-883d-7d370f7b7e33/volumes" Jan 31 17:20:54 crc kubenswrapper[4730]: I0131 17:20:54.495825 4730 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d88f94-8bbf-4f46-883d-7d370f7b7e33-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 17:20:55 crc kubenswrapper[4730]: I0131 17:20:55.017442 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rhb8m_must-gather-wdrtz_56d88f94-8bbf-4f46-883d-7d370f7b7e33/copy/0.log" Jan 31 17:20:55 crc kubenswrapper[4730]: I0131 17:20:55.017799 4730 scope.go:117] "RemoveContainer" containerID="ee6da4e03bfaaf100a360969f6dcb54cbff7e71c20916a9dabf9d6a159b39a50" Jan 31 17:20:55 crc kubenswrapper[4730]: I0131 17:20:55.017893 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rhb8m/must-gather-wdrtz" Jan 31 17:20:55 crc kubenswrapper[4730]: I0131 17:20:55.036862 4730 scope.go:117] "RemoveContainer" containerID="c6f969ee869575d0a7ec8770d6682e7f0ceef84fe2d202282918812c3d3435f0" Jan 31 17:20:55 crc kubenswrapper[4730]: I0131 17:20:55.464507 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:20:55 crc kubenswrapper[4730]: I0131 17:20:55.464534 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:20:55 crc kubenswrapper[4730]: E0131 17:20:55.660658 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:20:56 crc kubenswrapper[4730]: I0131 17:20:56.040372 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"298961e45fc2f90905df473c02b154103f3f3a4e3f7849be9e93f37e960693c0"} Jan 31 17:20:56 crc kubenswrapper[4730]: I0131 17:20:56.041319 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:20:56 crc kubenswrapper[4730]: E0131 17:20:56.041708 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:20:56 crc kubenswrapper[4730]: I0131 17:20:56.041953 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:20:57 crc kubenswrapper[4730]: I0131 17:20:57.051494 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:20:57 crc kubenswrapper[4730]: E0131 17:20:57.051684 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:20:59 crc kubenswrapper[4730]: I0131 17:20:59.463733 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:20:59 crc kubenswrapper[4730]: I0131 17:20:59.464022 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:20:59 crc kubenswrapper[4730]: I0131 17:20:59.464043 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:20:59 crc kubenswrapper[4730]: I0131 17:20:59.464103 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:20:59 crc kubenswrapper[4730]: E0131 17:20:59.464354 4730 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:21:00 crc kubenswrapper[4730]: I0131 17:21:00.663541 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:00 crc kubenswrapper[4730]: I0131 17:21:00.663865 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:03 crc kubenswrapper[4730]: I0131 17:21:03.666920 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:05 crc kubenswrapper[4730]: I0131 17:21:05.661834 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:06 crc kubenswrapper[4730]: I0131 17:21:06.657445 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:06 crc kubenswrapper[4730]: I0131 17:21:06.657975 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:21:06 crc kubenswrapper[4730]: I0131 17:21:06.658765 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"298961e45fc2f90905df473c02b154103f3f3a4e3f7849be9e93f37e960693c0"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:21:06 crc kubenswrapper[4730]: I0131 17:21:06.658795 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:06 crc kubenswrapper[4730]: I0131 17:21:06.658844 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" 
containerID="cri-o://298961e45fc2f90905df473c02b154103f3f3a4e3f7849be9e93f37e960693c0" gracePeriod=30 Jan 31 17:21:06 crc kubenswrapper[4730]: I0131 17:21:06.675713 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:07 crc kubenswrapper[4730]: E0131 17:21:07.116714 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:21:07 crc kubenswrapper[4730]: I0131 17:21:07.175941 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="298961e45fc2f90905df473c02b154103f3f3a4e3f7849be9e93f37e960693c0" exitCode=0 Jan 31 17:21:07 crc kubenswrapper[4730]: I0131 17:21:07.176006 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"298961e45fc2f90905df473c02b154103f3f3a4e3f7849be9e93f37e960693c0"} Jan 31 17:21:07 crc kubenswrapper[4730]: I0131 17:21:07.176038 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerStarted","Data":"29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a"} Jan 31 17:21:07 crc kubenswrapper[4730]: I0131 17:21:07.176081 4730 scope.go:117] "RemoveContainer" containerID="cd5f5e7dfe96da0af2178b6cb9fa034e084e29b6305ca046585f735cdb31495f" Jan 31 17:21:07 crc kubenswrapper[4730]: I0131 17:21:07.176393 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:21:07 crc kubenswrapper[4730]: I0131 17:21:07.177123 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:07 crc kubenswrapper[4730]: E0131 17:21:07.177440 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:21:08 crc kubenswrapper[4730]: I0131 17:21:08.192160 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:08 crc kubenswrapper[4730]: E0131 17:21:08.192943 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:21:12 crc kubenswrapper[4730]: I0131 17:21:12.664079 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Jan 31 17:21:14 crc kubenswrapper[4730]: I0131 17:21:14.473627 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:21:14 crc kubenswrapper[4730]: I0131 17:21:14.474056 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:21:14 crc kubenswrapper[4730]: I0131 17:21:14.474104 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:21:14 crc kubenswrapper[4730]: I0131 17:21:14.474229 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:21:14 crc kubenswrapper[4730]: E0131 17:21:14.474933 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:21:15 crc kubenswrapper[4730]: I0131 17:21:15.912267 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:15 crc kubenswrapper[4730]: I0131 17:21:15.913088 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:18 crc kubenswrapper[4730]: I0131 17:21:18.660010 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 17:21:18 crc kubenswrapper[4730]: I0131 17:21:18.660792 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/swift-proxy-5867f46d87-f8rf9" Jan 31 17:21:18 crc kubenswrapper[4730]: I0131 17:21:18.662074 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a"} pod="openstack/swift-proxy-5867f46d87-f8rf9" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Jan 31 17:21:18 crc kubenswrapper[4730]: I0131 17:21:18.662106 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:18 crc kubenswrapper[4730]: I0131 
17:21:18.662169 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" containerID="cri-o://29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" gracePeriod=30 Jan 31 17:21:18 crc kubenswrapper[4730]: I0131 17:21:18.670311 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": read tcp 10.217.0.2:50532->10.217.0.176:8080: read: connection reset by peer" Jan 31 17:21:18 crc kubenswrapper[4730]: E0131 17:21:18.779030 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:21:19 crc kubenswrapper[4730]: I0131 17:21:19.302386 4730 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" exitCode=0 Jan 31 17:21:19 crc kubenswrapper[4730]: I0131 17:21:19.302465 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5867f46d87-f8rf9" event={"ID":"4c3d9aec-6a99-480d-a7f3-0703ac92db04","Type":"ContainerDied","Data":"29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a"} Jan 31 17:21:19 crc kubenswrapper[4730]: I0131 17:21:19.302779 4730 scope.go:117] "RemoveContainer" containerID="298961e45fc2f90905df473c02b154103f3f3a4e3f7849be9e93f37e960693c0" Jan 31 17:21:19 crc kubenswrapper[4730]: I0131 17:21:19.304398 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:21:19 crc kubenswrapper[4730]: I0131 17:21:19.304548 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:19 crc kubenswrapper[4730]: E0131 17:21:19.306474 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:21:25 crc kubenswrapper[4730]: I0131 17:21:25.466643 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:21:25 crc kubenswrapper[4730]: I0131 17:21:25.467937 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:21:25 crc kubenswrapper[4730]: I0131 17:21:25.468005 4730 scope.go:117] "RemoveContainer" 
containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:21:25 crc kubenswrapper[4730]: I0131 17:21:25.468173 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:21:25 crc kubenswrapper[4730]: E0131 17:21:25.469093 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:21:26 crc kubenswrapper[4730]: I0131 17:21:26.976381 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:21:26 crc kubenswrapper[4730]: I0131 17:21:26.976693 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:21:33 crc kubenswrapper[4730]: I0131 17:21:33.464310 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:21:33 crc kubenswrapper[4730]: I0131 17:21:33.464641 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:33 crc kubenswrapper[4730]: E0131 17:21:33.465042 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:21:40 crc kubenswrapper[4730]: I0131 17:21:40.469966 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:21:40 crc kubenswrapper[4730]: I0131 17:21:40.470595 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:21:40 crc kubenswrapper[4730]: I0131 17:21:40.470627 4730 scope.go:117] "RemoveContainer" 
containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:21:40 crc kubenswrapper[4730]: I0131 17:21:40.470699 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:21:40 crc kubenswrapper[4730]: E0131 17:21:40.471183 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:21:45 crc kubenswrapper[4730]: I0131 17:21:45.464579 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:21:45 crc kubenswrapper[4730]: I0131 17:21:45.465208 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:45 crc kubenswrapper[4730]: E0131 17:21:45.465666 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:21:47 crc kubenswrapper[4730]: I0131 17:21:47.679538 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:21:47 crc kubenswrapper[4730]: E0131 17:21:47.679719 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:21:47 crc kubenswrapper[4730]: E0131 17:21:47.680622 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:23:49.680595742 +0000 UTC m=+3216.486652688 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:21:55 crc kubenswrapper[4730]: I0131 17:21:55.464523 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:21:55 crc kubenswrapper[4730]: I0131 17:21:55.465114 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:21:55 crc kubenswrapper[4730]: I0131 17:21:55.465142 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:21:55 crc kubenswrapper[4730]: I0131 17:21:55.465219 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:21:55 crc kubenswrapper[4730]: E0131 17:21:55.465579 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:21:56 crc kubenswrapper[4730]: I0131 17:21:56.974986 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:21:56 crc kubenswrapper[4730]: I0131 17:21:56.975683 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:21:57 crc kubenswrapper[4730]: I0131 17:21:57.465195 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:21:57 crc kubenswrapper[4730]: I0131 17:21:57.465477 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:21:57 crc kubenswrapper[4730]: E0131 17:21:57.465792 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:22:05 crc kubenswrapper[4730]: E0131 17:22:05.178695 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-ring-rebalance-md2pb" podUID="62d8ac66-dbb1-4b02-844e-13123934241d" Jan 31 17:22:05 crc kubenswrapper[4730]: I0131 17:22:05.724636 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:22:09 crc kubenswrapper[4730]: I0131 17:22:09.464682 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:22:09 crc kubenswrapper[4730]: I0131 17:22:09.465433 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:22:09 crc kubenswrapper[4730]: I0131 17:22:09.465476 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:22:09 crc kubenswrapper[4730]: I0131 17:22:09.465572 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:22:09 crc kubenswrapper[4730]: E0131 17:22:09.466057 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:22:09 crc kubenswrapper[4730]: I0131 17:22:09.466671 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:22:09 crc kubenswrapper[4730]: I0131 17:22:09.466696 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:22:09 crc kubenswrapper[4730]: E0131 17:22:09.467062 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:22:21 crc 
kubenswrapper[4730]: I0131 17:22:21.465728 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:22:21 crc kubenswrapper[4730]: I0131 17:22:21.466474 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:22:21 crc kubenswrapper[4730]: E0131 17:22:21.467026 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:22:22 crc kubenswrapper[4730]: I0131 17:22:22.464972 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:22:22 crc kubenswrapper[4730]: I0131 17:22:22.465061 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:22:22 crc kubenswrapper[4730]: I0131 17:22:22.465094 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:22:22 crc kubenswrapper[4730]: I0131 17:22:22.465178 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:22:22 crc kubenswrapper[4730]: E0131 17:22:22.465745 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:22:26 crc kubenswrapper[4730]: I0131 17:22:26.976233 4730 patch_prober.go:28] interesting pod/machine-config-daemon-mzg47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 17:22:26 crc kubenswrapper[4730]: I0131 17:22:26.977037 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 17:22:26 crc kubenswrapper[4730]: I0131 
17:22:26.977087 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" Jan 31 17:22:26 crc kubenswrapper[4730]: I0131 17:22:26.977978 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8"} pod="openshift-machine-config-operator/machine-config-daemon-mzg47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 17:22:26 crc kubenswrapper[4730]: I0131 17:22:26.978035 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerName="machine-config-daemon" containerID="cri-o://76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" gracePeriod=600 Jan 31 17:22:27 crc kubenswrapper[4730]: E0131 17:22:27.116326 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:22:27 crc kubenswrapper[4730]: I0131 17:22:27.949888 4730 generic.go:334] "Generic (PLEG): container finished" podID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" exitCode=0 Jan 31 17:22:27 crc kubenswrapper[4730]: I0131 17:22:27.949930 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" event={"ID":"47cbebb1-b682-4013-a2d5-7ca2f47f03e6","Type":"ContainerDied","Data":"76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8"} Jan 31 17:22:27 crc kubenswrapper[4730]: I0131 17:22:27.949960 4730 scope.go:117] "RemoveContainer" containerID="8f0b779e1030f9cbd3ff463a2fefa2b4f4a055fd00a384af88e6f8249382c9c3" Jan 31 17:22:27 crc kubenswrapper[4730]: I0131 17:22:27.951045 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:22:27 crc kubenswrapper[4730]: E0131 17:22:27.951686 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:22:34 crc kubenswrapper[4730]: I0131 17:22:34.466067 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:22:34 crc kubenswrapper[4730]: I0131 17:22:34.466582 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:22:34 crc kubenswrapper[4730]: I0131 17:22:34.466607 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:22:34 crc kubenswrapper[4730]: I0131 17:22:34.466676 4730 scope.go:117] 
"RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:22:34 crc kubenswrapper[4730]: E0131 17:22:34.467654 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:22:36 crc kubenswrapper[4730]: I0131 17:22:36.464114 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:22:36 crc kubenswrapper[4730]: I0131 17:22:36.464580 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:22:36 crc kubenswrapper[4730]: E0131 17:22:36.465010 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:22:38 crc kubenswrapper[4730]: I0131 17:22:38.466193 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:22:38 crc kubenswrapper[4730]: E0131 17:22:38.468369 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:22:45 crc kubenswrapper[4730]: I0131 17:22:45.465087 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:22:45 crc kubenswrapper[4730]: I0131 17:22:45.467090 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:22:45 crc kubenswrapper[4730]: I0131 17:22:45.467170 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:22:45 crc kubenswrapper[4730]: I0131 17:22:45.467266 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:22:46 
crc kubenswrapper[4730]: I0131 17:22:46.120152 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" exitCode=1 Jan 31 17:22:46 crc kubenswrapper[4730]: I0131 17:22:46.120327 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerStarted","Data":"416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88"} Jan 31 17:22:46 crc kubenswrapper[4730]: I0131 17:22:46.120862 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486"} Jan 31 17:22:46 crc kubenswrapper[4730]: I0131 17:22:46.120892 4730 scope.go:117] "RemoveContainer" containerID="10b29d3d48c76c5690770cebc98944878cc3ee985abc748c81bc3435aaf90c90" Jan 31 17:22:46 crc kubenswrapper[4730]: E0131 17:22:46.176488 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.138778 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" exitCode=1 Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.138848 4730 generic.go:334] "Generic (PLEG): container finished" podID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" exitCode=1 Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.138878 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88"} Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.138919 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3656b8f0-e1d3-4214-9c23-dd437a57f2ad","Type":"ContainerDied","Data":"8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc"} Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.138970 4730 scope.go:117] "RemoveContainer" containerID="2098f8f4c2c92e9fb41984f2427718ddadf6830192127b544e916db3b7efe57a" Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.141181 4730 scope.go:117] "RemoveContainer" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.141645 4730 scope.go:117] "RemoveContainer" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.144087 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.144248 4730 scope.go:117] "RemoveContainer" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" Jan 31 17:22:47 crc kubenswrapper[4730]: E0131 17:22:47.145358 4730 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:22:47 crc kubenswrapper[4730]: I0131 17:22:47.233538 4730 scope.go:117] "RemoveContainer" containerID="da85a521b6e4cba829126a99b96114bf039cd27c5bb5e85e06d4937e3254a297" Jan 31 17:22:48 crc kubenswrapper[4730]: I0131 17:22:48.176063 4730 scope.go:117] "RemoveContainer" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" Jan 31 17:22:48 crc kubenswrapper[4730]: I0131 17:22:48.176338 4730 scope.go:117] "RemoveContainer" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" Jan 31 17:22:48 crc kubenswrapper[4730]: I0131 17:22:48.176360 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:22:48 crc kubenswrapper[4730]: I0131 17:22:48.176418 4730 scope.go:117] "RemoveContainer" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" Jan 31 17:22:48 crc kubenswrapper[4730]: E0131 17:22:48.176696 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:22:50 crc kubenswrapper[4730]: I0131 17:22:50.465154 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:22:50 crc kubenswrapper[4730]: I0131 17:22:50.465413 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:22:50 crc kubenswrapper[4730]: E0131 17:22:50.465988 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:22:51 crc kubenswrapper[4730]: I0131 17:22:51.464777 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:22:51 crc kubenswrapper[4730]: E0131 17:22:51.465364 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:23:02 crc kubenswrapper[4730]: I0131 17:23:02.464584 4730 scope.go:117] "RemoveContainer" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" Jan 31 17:23:02 crc kubenswrapper[4730]: I0131 17:23:02.466220 4730 scope.go:117] "RemoveContainer" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" Jan 31 17:23:02 crc kubenswrapper[4730]: I0131 17:23:02.466307 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:23:02 crc kubenswrapper[4730]: I0131 17:23:02.466421 4730 scope.go:117] "RemoveContainer" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" Jan 31 17:23:02 crc kubenswrapper[4730]: E0131 17:23:02.466866 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:23:03 crc kubenswrapper[4730]: I0131 17:23:03.464269 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:23:03 crc kubenswrapper[4730]: I0131 17:23:03.464527 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:23:03 crc kubenswrapper[4730]: E0131 17:23:03.464845 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" 
for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:23:04 crc kubenswrapper[4730]: I0131 17:23:04.474156 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:23:04 crc kubenswrapper[4730]: E0131 17:23:04.474713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:23:14 crc kubenswrapper[4730]: I0131 17:23:14.474564 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:23:14 crc kubenswrapper[4730]: I0131 17:23:14.476697 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:23:14 crc kubenswrapper[4730]: E0131 17:23:14.479625 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:23:15 crc kubenswrapper[4730]: I0131 17:23:15.922376 4730 scope.go:117] "RemoveContainer" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" Jan 31 17:23:15 crc kubenswrapper[4730]: I0131 17:23:15.922440 4730 scope.go:117] "RemoveContainer" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" Jan 31 17:23:15 crc kubenswrapper[4730]: I0131 17:23:15.922460 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:23:15 crc kubenswrapper[4730]: I0131 17:23:15.922522 4730 scope.go:117] "RemoveContainer" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" Jan 31 17:23:15 crc kubenswrapper[4730]: E0131 17:23:15.922853 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:23:16 crc kubenswrapper[4730]: I0131 17:23:16.464764 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:23:16 crc kubenswrapper[4730]: E0131 17:23:16.465247 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:23:28 crc kubenswrapper[4730]: I0131 17:23:28.464535 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:23:28 crc kubenswrapper[4730]: I0131 17:23:28.465004 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:23:28 crc kubenswrapper[4730]: E0131 17:23:28.465244 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:23:29 crc kubenswrapper[4730]: I0131 17:23:29.465185 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:23:29 crc kubenswrapper[4730]: E0131 17:23:29.465554 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:23:29 crc kubenswrapper[4730]: I0131 17:23:29.465717 4730 scope.go:117] "RemoveContainer" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" Jan 31 17:23:29 crc kubenswrapper[4730]: I0131 17:23:29.465795 4730 scope.go:117] "RemoveContainer" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" Jan 31 17:23:29 crc kubenswrapper[4730]: I0131 17:23:29.465938 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:23:29 crc kubenswrapper[4730]: I0131 17:23:29.466021 4730 scope.go:117] "RemoveContainer" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" Jan 31 17:23:29 crc kubenswrapper[4730]: E0131 17:23:29.466403 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:23:40 crc kubenswrapper[4730]: I0131 17:23:40.464748 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:23:40 crc kubenswrapper[4730]: I0131 17:23:40.465473 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:23:40 crc kubenswrapper[4730]: E0131 17:23:40.465942 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:23:42 crc kubenswrapper[4730]: I0131 17:23:42.464449 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:23:42 crc kubenswrapper[4730]: E0131 17:23:42.465005 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:23:44 crc kubenswrapper[4730]: I0131 17:23:44.471101 4730 scope.go:117] "RemoveContainer" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" Jan 31 17:23:44 crc kubenswrapper[4730]: I0131 17:23:44.471541 4730 scope.go:117] "RemoveContainer" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" Jan 31 17:23:44 crc kubenswrapper[4730]: I0131 17:23:44.471594 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:23:44 crc kubenswrapper[4730]: I0131 17:23:44.471704 4730 scope.go:117] "RemoveContainer" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" Jan 31 17:23:44 crc kubenswrapper[4730]: E0131 17:23:44.472408 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for 
\"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:23:49 crc kubenswrapper[4730]: I0131 17:23:49.698979 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices\") pod \"swift-ring-rebalance-md2pb\" (UID: \"62d8ac66-dbb1-4b02-844e-13123934241d\") " pod="openstack/swift-ring-rebalance-md2pb" Jan 31 17:23:49 crc kubenswrapper[4730]: E0131 17:23:49.699197 4730 configmap.go:193] Couldn't get configMap openstack/swift-ring-config-data: configmap "swift-ring-config-data" not found Jan 31 17:23:49 crc kubenswrapper[4730]: E0131 17:23:49.699673 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices podName:62d8ac66-dbb1-4b02-844e-13123934241d nodeName:}" failed. No retries permitted until 2026-01-31 17:25:51.69965499 +0000 UTC m=+3338.505711906 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/62d8ac66-dbb1-4b02-844e-13123934241d-ring-data-devices") pod "swift-ring-rebalance-md2pb" (UID: "62d8ac66-dbb1-4b02-844e-13123934241d") : configmap "swift-ring-config-data" not found Jan 31 17:23:54 crc kubenswrapper[4730]: I0131 17:23:54.472320 4730 scope.go:117] "RemoveContainer" containerID="76687ee13b5143ed454a14a2de3825fe6e5a14d76c7ed16820dc4bdf24c6a6f8" Jan 31 17:23:54 crc kubenswrapper[4730]: E0131 17:23:54.473676 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mzg47_openshift-machine-config-operator(47cbebb1-b682-4013-a2d5-7ca2f47f03e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-mzg47" podUID="47cbebb1-b682-4013-a2d5-7ca2f47f03e6" Jan 31 17:23:55 crc kubenswrapper[4730]: I0131 17:23:55.463753 4730 scope.go:117] "RemoveContainer" containerID="29c95e698af05dd65bee6eee51be964385654f4d4b932bcd12d9d61b0f04be3a" Jan 31 17:23:55 crc kubenswrapper[4730]: I0131 17:23:55.464016 4730 scope.go:117] "RemoveContainer" containerID="d107bcfbe2dcc9849b6f582a0dcd8e914cdff952c654cba5bed4e626acda46f0" Jan 31 17:23:55 crc kubenswrapper[4730]: E0131 17:23:55.464276 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-5867f46d87-f8rf9_openstack(4c3d9aec-6a99-480d-a7f3-0703ac92db04)\"]" pod="openstack/swift-proxy-5867f46d87-f8rf9" 
podUID="4c3d9aec-6a99-480d-a7f3-0703ac92db04" Jan 31 17:23:59 crc kubenswrapper[4730]: I0131 17:23:59.465482 4730 scope.go:117] "RemoveContainer" containerID="d69412070ea11bc15bd0366754457f7e5d30159bd64d6968d07c5b7ad604f486" Jan 31 17:23:59 crc kubenswrapper[4730]: I0131 17:23:59.465822 4730 scope.go:117] "RemoveContainer" containerID="416834c2bf497d3904504ea512dd67803794a4a15c0d0243ca0271f0332f3a88" Jan 31 17:23:59 crc kubenswrapper[4730]: I0131 17:23:59.465844 4730 scope.go:117] "RemoveContainer" containerID="df9227825afb45b36a3391033a1e59ca57fc5ee9b2a746a96a1b6e3e4f675a95" Jan 31 17:23:59 crc kubenswrapper[4730]: I0131 17:23:59.465907 4730 scope.go:117] "RemoveContainer" containerID="8f78ab4d4a55577afccef8bab480bf50be88440a4c88ec653d629f7958d4e6fc" Jan 31 17:23:59 crc kubenswrapper[4730]: E0131 17:23:59.466182 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_openstack(3656b8f0-e1d3-4214-9c23-dd437a57f2ad)\"]" pod="openstack/swift-storage-0" podUID="3656b8f0-e1d3-4214-9c23-dd437a57f2ad" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.169838 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chrmz"] Jan 31 17:24:03 crc kubenswrapper[4730]: E0131 17:24:03.170539 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerName="gather" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170550 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerName="gather" Jan 31 17:24:03 crc kubenswrapper[4730]: E0131 17:24:03.170569 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="extract-content" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170574 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="extract-content" Jan 31 17:24:03 crc kubenswrapper[4730]: E0131 17:24:03.170585 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="extract-utilities" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170591 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="extract-utilities" Jan 31 17:24:03 crc kubenswrapper[4730]: E0131 17:24:03.170613 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="registry-server" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170618 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" 
containerName="registry-server" Jan 31 17:24:03 crc kubenswrapper[4730]: E0131 17:24:03.170628 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerName="copy" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170633 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerName="copy" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170788 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32f08aa-0df5-4400-8e4b-4d8e2346f792" containerName="registry-server" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170819 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerName="copy" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.170841 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d88f94-8bbf-4f46-883d-7d370f7b7e33" containerName="gather" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.172070 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chrmz" Jan 31 17:24:03 crc kubenswrapper[4730]: I0131 17:24:03.191234 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chrmz"]